In [1]:
from IPython.display import HTML

HTML('''<script>
code_show=true; 
function code_toggle() {
 if (code_show){
 $('div.input').hide();
 } else {
 $('div.input').show();
 }
 code_show = !code_show
} 
$( document ).ready(code_toggle);
</script>
<form action="javascript:code_toggle()"><input type="submit" value="Click here to toggle on/off the raw code."></form>''')
Out[1]:
In [2]:
%load_ext autoreload
%autoreload 2
%reload_ext autoreload

%matplotlib inline

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import matplotlib.patches as mpatches
from mpl_toolkits.mplot3d import Axes3D

from sklearn.preprocessing import normalize
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
from sklearn import metrics
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier


from keras.models import Sequential, Model
from keras.layers import Conv1D, Conv2D, Conv2DTranspose,MaxPooling1D, Dense, Dropout, Flatten, Input,Lambda
from keras.layers import LSTM, TimeDistributed
from keras import layers
from keras import backend as K

from pandas.plotting import autocorrelation_plot
from pandas import DataFrame
from statsmodels.tsa.arima_model import ARIMA
from sklearn.metrics import mean_squared_error
/usr/local/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
Using TensorFlow backend.
/usr/local/lib/python3.6/site-packages/statsmodels/compat/pandas.py:56: FutureWarning: The pandas.core.datetools module is deprecated and will be removed in a future version. Please use the pandas.tseries module instead.
  from pandas.core import datetools

ES 201 Final Project: Stress Detection

Elizabeth Healey and Bobby Gonzalez

Outline

  1. Introduction
  2. Data Collection Process
  3. Data Visualization
  4. Classification

    i. Logistic Regression

    ii. Decision Tree

    iii. Convolutional Neural Network

  5. Latent and Generative Modeling

  6. Conclusion

1. Introduction

The goal of this project is to infer patterns of stress activity. We want to use biometric data gathered from human subjects to be able to identify cognitive stress and create a model for predicting whether a person is stressed or not based on biological signals.

We first conducted a literature survey to learn about state-of-the-art stress detection methods using biomedical data. We found that Electrodermal Skin Response (EDA) and Heart Rate[HR] are signals commonly used to detect responses in the autonomous nervous system due to stress [ Setz et al 2010]. Below shows an example of what an SCR looks like: SCRRRRRR.png

[Setz et al. 2010]

EDA signals have two different components commonly measured: Skin Conductive Response [SCR] and Skin Conductance Level [SCL]. SCR refer to short spikes in the EDA signal, directly related to the autonomous nervous system. SCL refers to the tonic level of the EDA signal, that rises and falls over time [Cho et. al 2017].

We found papers that use features extracted from these signals to classify stress. The paper [Cho et. al 2017] discusses extracting features from EDA and HR such as SCR amplitude, SCR average, HR variability. In this paper they were able to reach 95% accuracy in a controlled study. In a different study [Alexandratos et al.], they got only 79.7% accuracy in a controlled study just using features extracted from skin conductivity using Random Forest Classification. A third paper that we found [Setz et al 2010] was able to get over 82% accuracy in a similar controlled study using LDA. In all of these papers, features from the EDA signal were extracted before putting them through a classification model.

We wanted to try something different for our project. Our goal was to see if we could use a classification model directly on the raw EDA signal. So instead of extracting features from the EDA signal relating to amplitude, peak-to-peak interval, rise time, etc., we wanted to see if we could classify the signal by simply using the raw signal through a model that would learn the features on its own.

2. Data Collection Process

In our initial iterations, we attempted to collect our own EDA data with the Empatica band in hopes of detecting stressful states. However, we ran into difficulties in effectively generated labelled datasets, and these difficulties included finding the right activities representative of stressful and relaxed states as well as synchronizing schedules with peers to source data from them. Moreover, had we sourced our own data from the Empatica band, we would have sourced at 4 Hz/sec instead of the sampling rate of the 8 Hz/sec in our final dataset. We instead looked to source time-series, EDA datasets available openly online. We initially found the Amigos dataset (link - http://www.eecs.qmul.ac.uk/mmv/datasets/amigos/readme.html), but this dataset was not available publicly. Moreover, it consisted of participants labelling videos (with many more options than stressed and relaxed), and therefore this dataset did not fit the current scope of our project. A second dataset that we found on Github, posted openly for Columbia University's HealthHacks conference (https://github.com/health-hacks), included samples of EDA data signals sourced from the Empatica band and a corresponding Excel file with time markers for when the stressful and relaxing stimuli were employed. However, this set only included on the order of 10 examples, and we also found the time markers to be inconsistent.

The dataset that we decided on was data from the University of Texas Dallas [https://www.utdallas.edu/~nourani/Bioinformatics/Biosensor_Data/] . This dataset was from a study involving 20 subjects who would relax for 5 minutes, be exposed to 5 minutes of a cognitive stress-inducing activity, and then relax for 5 mintutes. The dataset included EDA data from 20 subjects with labels of "stress" and "relax". The EDA was sampled at 8Hz. The was the largest dataset we could find, as well as the most reliable.

3. Data Visualization

After finding a dataset, our first step in determining what techniques to use was to visualize the data. Below we took the data split it up into "stressed" and "relaxed" data for each subject. As mentioned previously, for each subject, there was a ~5minute period of relaxation, a ~5 minute period of coginitve stress, followed by a ~5 minute period of relaxation. We looked at these 3 periods of time for the 20 subjects. We put both portions of "relax" in the same category of "relaxed", even though the EDA signal differed. Our goal was to be able to detect the specific shape of the stress response.

Below, we plotted data for the 20 subjects. For each subject, there is a plot of the two "relaxed" portions (which differed from one another) and the "stressed" portion. Visually, it is clear that the "stressed" portions differ from the "relaxed" portions. The "stressed" portion contains a signature spike (SCR) that corresponds to reactions of the autonomous nervous system.

Our goal became to be able to predict whether or not a subject was in this state of cognitive stress, which is distinguished by the spikes.

Import Data

Below, each plot shows time on the X-axis, with the units being 1/8 of a second. This is because 8Hz was the sampling frequency. The Y axis is the EDA signal in uS.

Subject 1
In [3]:
data = 'Subject1AccTempEDA.csv'

sub_1 = pd.read_csv(data, delimiter = ",")

#print(sub_1)
eda = sub_1[['EDA']]
label = sub_1[['Label']]
label = label.values

eda = eda.values
shape = label.shape
part = label[:,0]
length = part.size

labels = np.zeros((length))


_1stress1 = []
_1stressE1 = []
_1stressE2 = []
_1relax1= []
_1relax2= []
_1relax3= []
_1relax4= []
rc = 0
sc = 0
Esc = 0
# Get labels 
for i in range(length):
    if label[[i]] == 'Relax':
        if i>1 and label[[i-1]] != label[[i]]:
            rc = rc+1
           
        labels[i] = 0
        if rc == 0:
            _1relax1.append(eda[i])
            if i>5000:
                print('hi')
        elif rc == 1:
            _1relax2.append(eda[i])
        elif rc == 2:
            _1relax3.append(eda[i])
        else:
            _1relax4.append(eda[i])
        
    elif label[[i]] == 'CognitiveStress':
        labels[i] = 1
        _1stress1.append( eda[i])
    elif label[[i]] == 'EmotionalStress':
        if i>100000 and label[[i-1]] != label[[i]]:
            Esc = Esc+1
        if Esc == 0:
            _1stressE1.append( eda[i])
        elif rc == 1:
            _1stressE2.append(eda[i])

        labels[i] = 3
    else:
        labels[i] = -1

#plt.plot(eda)
#plt.plot(labels)

plt.show()
plt.figure()
plt.title('Subject 1 Relaxed Segments')
plt.plot(_1relax1)
#plt.plot(_1relax2)
plt.plot(_1relax3)
#plt.plot(_1relax4)
plt.show()

plt.figure()
plt.title('Subject 1 Stressed Segment')
plt.plot(_1stress1)
#plt.plot(stressE1)
plt.plot(_1stressE2)
plt.show()
Subject 2
In [4]:
data = 'Subject2AccTempEDA.csv'

sub_1 = pd.read_csv(data, delimiter = ",")

#print(sub_1)
eda = sub_1[['EDA']]
label = sub_1[['Label']]
label = label.values

eda = eda.values
shape = label.shape
part = label[:,0]
length = part.size

labels = np.zeros((length))


_2stress1 = []
_2stressE1 = []
_2stressE2 = []
_2relax1= []
_2relax2= []
_2relax3= []
_2relax4= []
rc = 0
sc = 0
Esc = 0
# Get labels 
for i in range(length):
    if label[[i]] == 'Relax':
        if i>1 and label[[i-1]] != label[[i]]:
            rc = rc+1
            
        labels[i] = 0
        if rc == 0:
            _2relax1.append(eda[i])
            if i>5000:
                print('hi')
        elif rc == 1:
            _2relax2.append(eda[i])
        elif rc == 2:
            _2relax3.append(eda[i])
        else:
            _2relax4.append(eda[i])
        
    elif label[[i]] == 'CognitiveStress':
        labels[i] = 1
        _2stress1.append( eda[i])
    elif label[[i]] == 'EmotionalStress':
        if i>100000 and label[[i-1]] != label[[i]]:
            Esc = Esc+1
        if Esc == 0:
            _2stressE1.append( eda[i])
        elif rc == 1:
            _2stressE2.append(eda[i])

        labels[i] = 3
    else:
        labels[i] = -1

#plt.plot(eda)
#plt.plot(labels)

#plt.show()


plt.figure()
plt.title('Subject 2 Relaxed Segments')
plt.plot(_2relax1)
#plt.plot(_2relax2)
plt.plot(_2relax3)
#plt.plot(_2relax4)
plt.show()

plt.figure()
plt.title('Subject 2 Stressed Segment')
plt.plot(_2stress1)
#plt.plot(stressE1)
plt.plot(_2stressE2)
plt.show()
Subject 3
In [5]:
data = 'Subject3AccTempEDA.csv'

sub_1 = pd.read_csv(data, delimiter = ",")

#print(sub_1)
eda = sub_1[['EDA']]
label = sub_1[['Label']]
label = label.values

eda = eda.values
shape = label.shape
part = label[:,0]
length = part.size

labels = np.zeros((length))


_3stress1 = []
_3stressE1 = []
_3stressE2 = []
_3relax1= []
_3relax2= []
_3relax3= []
_3relax4= []
rc = 0
sc = 0
Esc = 0
# Get labels 
for i in range(length):
    if label[[i]] == 'Relax':
        if i>1 and label[[i-1]] != label[[i]]:
            rc = rc+1
            
        labels[i] = 0
        if rc == 0:
            _3relax1.append(eda[i])
            if i>5000:
                print('hi')
        elif rc == 1:
            _3relax2.append(eda[i])
        elif rc == 2:
            _3relax3.append(eda[i])
        else:
            _3relax4.append(eda[i])
        
    elif label[[i]] == 'CognitiveStress':
        labels[i] = 1
        _3stress1.append( eda[i])
    elif label[[i]] == 'EmotionalStress':
        if i>100000 and label[[i-1]] != label[[i]]:
            Esc = Esc+1
        if Esc == 0:
            _3stressE1.append( eda[i])
        elif rc == 1:
            _3stressE2.append(eda[i])

        labels[i] = 3
    else:
        labels[i] = -1

#print(length)
#plt.plot(eda)
#plt.plot(labels)

#plt.show()


plt.figure()
plt.title('Subject 3 Relaxed Segments')
plt.plot(_3relax1)
#plt.plot(_3relax2)
plt.plot(_3relax3)
#plt.plot(_3relax4)
plt.show()

plt.figure()
plt.title('Subject 3 Stress Segment')
plt.plot(_3stress1)
#plt.plot(stressE1)
#plt.plot(_3stressE2)
plt.show()
Subject 4
In [6]:
data = 'Subject4AccTempEDA.csv'

sub_1 = pd.read_csv(data, delimiter = ",")

#print(sub_1)
eda = sub_1[['EDA']]
label = sub_1[['Label']]
label = label.values

eda = eda.values
shape = label.shape
part = label[:,0]
length = part.size

labels = np.zeros((length))


_4stress1 = []
_4stressE1 = []
_4stressE2 = []
_4relax1= []
_4relax2= []
_4relax3= []
_4relax4= []
rc = 0
sc = 0
Esc = 0
# Get labels 
for i in range(length):
    if label[[i]] == 'Relax':
        if i>1 and label[[i-1]] != label[[i]]:
            rc = rc+1
        labels[i] = 0
        if rc == 0:
            _4relax1.append(eda[i])
            if i>5000:
                print('hi')
        elif rc == 1:
            _4relax2.append(eda[i])
        elif rc == 2:
            _4relax3.append(eda[i])
        else:
            _4relax4.append(eda[i])
        
    elif label[[i]] == 'CognitiveStress':
        labels[i] = 1
        _4stress1.append( eda[i])
    elif label[[i]] == 'EmotionalStress':
        if i>100000 and label[[i-1]] != label[[i]]:
            Esc = Esc+1
        if Esc == 0:
            _4stressE1.append( eda[i])
        elif Esc == 1:
            _4stressE2.append(eda[i])

        labels[i] = 3
    else:
        labels[i] = -1

#print(length)
#plt.plot(eda)
#plt.plot(labels)

#plt.show()


plt.figure()
plt.title('Subject 4 Relaxed Segments')
plt.plot(_4relax1)
#plt.plot(_4relax2)
plt.plot(_4relax3)
#plt.plot(_4relax4)
plt.show()

plt.figure()
plt.title('Subject 4 stressed Segment')


plt.plot(_4stress1)
#plt.plot(stressE1)
#plt.plot(_4stressE2)
plt.show()
Subject 5
In [7]:
data = 'Subject5AccTempEDA.csv'

sub_1 = pd.read_csv(data, delimiter = ",")

#print(sub_1)
eda = sub_1[['EDA']]
label = sub_1[['Label']]
label = label.values

eda = eda.values
shape = label.shape
part = label[:,0]
length = part.size

labels = np.zeros((length))


_5stress1 = []
_5stressE1 = []
_5stressE2 = []
_5relax1= []
_5relax2= []
_5relax3= []
_5relax4= []
rc = 0
sc = 0
Esc = 0
# Get labels 
for i in range(length):
    if label[[i]] == 'Relax':
        if i>1 and label[[i-1]] != label[[i]]:
            rc = rc+1
           
        labels[i] = 0
        if rc == 0:
            _5relax1.append(eda[i])
            if i>5000:
                print('hi')
        elif rc == 1:
            _5relax2.append(eda[i])
        elif rc == 2:
            _5relax3.append(eda[i])
        else:
            _5relax4.append(eda[i])
        
    elif label[[i]] == 'CognitiveStress':
        labels[i] = 1
        _5stress1.append( eda[i])
    elif label[[i]] == 'EmotionalStress':
        if i>100000 and label[[i-1]] != label[[i]]:
            Esc = Esc+1
        if Esc == 0:
            _5stressE1.append( eda[i])
        elif Esc == 1:
            _5stressE2.append(eda[i])

        labels[i] = 3
    else:
        labels[i] = -1

#plt.plot(eda)
#plt.plot(labels)

#plt.show()


plt.figure()
plt.title('Subject 5 Relaxed Segments')
plt.plot(_5relax1)
#plt.plot(_5relax2)
plt.plot(_5relax3)
#plt.plot(_5relax4)
plt.xlabel('seconds')
plt.ylabel('uS')
plt.show()

plt.figure()
plt.title('Subject 5 Stressed Segments')
plt.plot(_5stress1)
#plt.plot(stressE1)
plt.xlabel('seconds')
plt.ylabel('uS')
plt.show()
Subject 6
In [8]:
data = 'Subject6AccTempEDA.csv'
sub_1 = pd.read_csv(data, delimiter = ",")
eda = sub_1[['EDA']]
label = sub_1[['Label']]
label = label.values
eda = eda.values
shape = label.shape
part = label[:,0]
length = part.size
labels = np.zeros((length))
_6stress1 = []
_6stressE1 = []
_6stressE2 = []
_6relax1= []
_6relax2= []
_6relax3= []
_6relax4= []
rc = 0
sc = 0
Esc = 0
# Get labels 
for i in range(length):
    if label[[i]] == 'Relax':
        if i>1 and label[[i-1]] != label[[i]]:
            rc = rc+1
            
        labels[i] = 0
        if rc == 0:
            _6relax1.append(eda[i])
            if i>5000:
                print('hi')
        elif rc == 1:
            _6relax2.append(eda[i])
        elif rc == 2:
            _6relax3.append(eda[i])
        else:
            _6relax4.append(eda[i])      
    elif label[[i]] == 'CognitiveStress':
        labels[i] = 1
        _6stress1.append( eda[i])
    elif label[[i]] == 'EmotionalStress':
        if i>100000 and label[[i-1]] != label[[i]]:
            Esc = Esc+1
        if Esc == 0:
            _6stressE1.append( eda[i])
        elif Esc == 1:
            _6stressE2.append(eda[i])

        labels[i] = 3
    else:
        labels[i] = -1
#print(length)
#plt.plot(eda)
#plt.plot(labels)
#plt.show()
plt.figure()
plt.title('Subject 6 Relaxed Segments')
plt.plot(_6relax1)
#plt.plot(_6relax2)
plt.plot(_6relax3)
#plt.plot(_6relax4)
plt.show()
plt.figure()
plt.title('Subject 6 Stressed Segments')
plt.plot(_6stress1)
#plt.plot(stressE1)
plt.show()
Subject 7
In [9]:
data = 'Subject7AccTempEDA.csv'
sub_1 = pd.read_csv(data, delimiter = ",")
eda = sub_1[['EDA']]
label = sub_1[['Label']]
label = label.values
eda = eda.values
shape = label.shape
part = label[:,0]
length = part.size
labels = np.zeros((length))
_7stress1 = []
_7stressE1 = []
_7stressE2 = []
_7relax1= []
_7relax2= []
_7relax3= []
_7relax4= []
rc = 0
sc = 0
Esc = 0
# Get labels 
for i in range(length):
    if label[[i]] == 'Relax':
        if i>1 and label[[i-1]] != label[[i]]:
            rc = rc+1
            
        labels[i] = 0
        if rc == 0:
            _7relax1.append(eda[i])
            if i>5000:
                print('hi')
        elif rc == 1:
            _7relax2.append(eda[i])
        elif rc == 2:
            _7relax3.append(eda[i])
        else:
            _7relax4.append(eda[i])      
    elif label[[i]] == 'CognitiveStress':
        labels[i] = 1
        _7stress1.append( eda[i])
    elif label[[i]] == 'EmotionalStress':
        if i>100000 and label[[i-1]] != label[[i]]:
            Esc = Esc+1
        if Esc == 0:
            _7stressE1.append( eda[i])
        elif Esc == 1:
            _7stressE2.append(eda[i])

        labels[i] = 3
    else:
        labels[i] = -1
#print(length)
#plt.plot(eda)
#plt.plot(labels)
#plt.show()
plt.figure()
plt.title('Subject 7 Relaxed Segments')
plt.plot(_7relax1)
#plt.plot(_7relax2)
plt.plot(_7relax3)
#plt.plot(_7relax4)
plt.show()
plt.figure()
plt.title('Subject 7 Stressed Segments')


plt.plot(_7stress1)
#plt.plot(stressE1)
plt.show()
Subject 8
In [10]:
data = 'Subject8AccTempEDA.csv'
sub_1 = pd.read_csv(data, delimiter = ",")
eda = sub_1[['EDA']]
label = sub_1[['Label']]
label = label.values
eda = eda.values
shape = label.shape
part = label[:,0]
length = part.size
labels = np.zeros((length))
_8stress1 = []
_8stressE1 = []
_8stressE2 = []
_8relax1= []
_8relax2= []
_8relax3= []
_8relax4= []
rc = 0
sc = 0
Esc = 0
# Get labels 
for i in range(length):
    if label[[i]] == 'Relax':
        if i>1 and label[[i-1]] != label[[i]]:
            rc = rc+1
            
        labels[i] = 0
        if rc == 0:
            _8relax1.append(eda[i])
            if i>5000:
                print('hi')
        elif rc == 1:
            _8relax2.append(eda[i])
        elif rc == 2:
            _8relax3.append(eda[i])
        else:
            _8relax4.append(eda[i])      
    elif label[[i]] == 'CognitiveStress':
        labels[i] = 1
        _8stress1.append( eda[i])
    elif label[[i]] == 'EmotionalStress':
        if i>100000 and label[[i-1]] != label[[i]]:
            Esc = Esc+1
        if Esc == 0:
            _8stressE1.append( eda[i])
        elif Esc == 1:
            _8stressE2.append(eda[i])

        labels[i] = 3
    else:
        labels[i] = -1
#print(length)
#plt.plot(eda)
#plt.plot(labels)
#plt.show()
plt.figure()
plt.title('Subject 8 Relaxed Segments')


plt.plot(_8relax1)
#plt.plot(_8relax2)
plt.plot(_8relax3)
#plt.plot(_8relax4)
plt.show()
plt.figure()
plt.title('Subject 8 Stressed Segments')
plt.plot(_8stress1)
#plt.plot(stressE1)
plt.show()
Subject 9
In [11]:
data = 'Subject9AccTempEDA.csv'
sub_1 = pd.read_csv(data, delimiter = ",")
eda = sub_1[['EDA']]
label = sub_1[['Label']]
label = label.values
eda = eda.values
shape = label.shape
part = label[:,0]
length = part.size
labels = np.zeros((length))
_9stress1 = []
_9stressE1 = []
_9stressE2 = []
_9relax1= []
_9relax2= []
_9relax3= []
_9relax4= []
rc = 0
sc = 0
Esc = 0
# Get labels 
for i in range(length):
    if label[[i]] == 'Relax':
        if i>1 and label[[i-1]] != label[[i]]:
            rc = rc+1
            
        labels[i] = 0
        if rc == 0:
            _9relax1.append(eda[i])
            if i>5000:
                print('hi')
        elif rc == 1:
            _9relax2.append(eda[i])
        elif rc == 2:
            _9relax3.append(eda[i])
        else:
            _9relax4.append(eda[i])      
    elif label[[i]] == 'CognitiveStress':
        labels[i] = 1
        _9stress1.append( eda[i])
    elif label[[i]] == 'EmotionalStress':
        if i>100000 and label[[i-1]] != label[[i]]:
            Esc = Esc+1
        if Esc == 0:
            _9stressE1.append( eda[i])
        elif Esc == 1:
            _9stressE2.append(eda[i])

        labels[i] = 3
    else:
        labels[i] = -1
#plt.plot(eda)
#plt.plot(labels)
#plt.show()
plt.figure()
plt.title('Subject 9 Relaxed Segments')
plt.plot(_9relax1)
#plt.plot(_9relax2)
plt.plot(_9relax3)
#plt.plot(_9relax4)
plt.xlabel('samples')
plt.ylabel('uS')
plt.show()
plt.figure()
plt.title('Subject 9 Stressed Segments')


plt.plot(_9stress1)
plt.xlabel('samples')
plt.ylabel('uS')
#plt.plot(stressE1)
plt.show()
Subject 10
In [12]:
data = 'Subject10AccTempEDA.csv'
sub_1 = pd.read_csv(data, delimiter = ",")
eda = sub_1[['EDA']]
label = sub_1[['Label']]
label = label.values
eda = eda.values
shape = label.shape
part = label[:,0]
length = part.size
labels = np.zeros((length))
_10stress1 = []
_10stressE1 = []
_10stressE2 = []
_10relax1= []
_10relax2= []
_10relax3= []
_10relax4= []
rc = 0
sc = 0
Esc = 0
# Get labels 
for i in range(length):
    if label[[i]] == 'Relax':
        if i>1 and label[[i-1]] != label[[i]]:
            rc = rc+1
            
        labels[i] = 0
        if rc == 0:
            _10relax1.append(eda[i])
            if i>5000:
                print('hi')
        elif rc == 1:
            _10relax2.append(eda[i])
        elif rc == 2:
            _10relax3.append(eda[i])
        else:
            _10relax4.append(eda[i])      
    elif label[[i]] == 'CognitiveStress':
        labels[i] = 1
        _10stress1.append( eda[i])
    elif label[[i]] == 'EmotionalStress':
        if i>100000 and label[[i-1]] != label[[i]]:
            Esc = Esc+1
        if Esc == 0:
            _10stressE1.append( eda[i])
        elif Esc == 1:
            _10stressE2.append(eda[i])

        labels[i] = 3
    else:
        labels[i] = -1
#plt.plot(eda)
#plt.plot(labels)
#plt.show()
plt.figure()
plt.title('Subject 10 Relaxed Segments')
plt.plot(_10relax1)
#plt.plot(_10relax2)
plt.plot(_10relax3)
#plt.plot(_10relax4)
plt.show()
plt.figure()
plt.title('Subject 10 Stress Segments')
plt.plot(_10stress1)
#plt.plot(stressE1)
plt.show()
Subject 11
In [13]:
data = 'Subject11AccTempEDA.csv'
sub_1 = pd.read_csv(data, delimiter = ",")
eda = sub_1[['EDA']]
label = sub_1[['Label']]
label = label.values
eda = eda.values
shape = label.shape
part = label[:,0]
length = part.size
labels = np.zeros((length))
_11stress1 = []
_11stressE1 = []
_11stressE2 = []
_11relax1= []
_11relax2= []
_11relax3= []
_11relax4= []
rc = 0
sc = 0
Esc = 0

# Get labels 
for i in range(length):
    if label[[i]] == 'Relax':
        if i>1 and label[[i-1]] != label[[i]]:
            rc = rc+1
            
        labels[i] = 0
        if rc == 0:
            _11relax1.append(eda[i])
            if i>5000:
                print('hi')
        elif rc == 1:
            _11relax2.append(eda[i])
        elif rc == 2:
            _11relax3.append(eda[i])
        else:
            _11relax4.append(eda[i])      
    elif label[[i]] == 'CognitiveStress':
        labels[i] = 1
        _11stress1.append( eda[i])
    elif label[[i]] == 'EmotionalStress':
        if i>100000 and label[[i-1]] != label[[i]]:
            Esc = Esc+1
        if Esc == 0:
            _11stressE1.append( eda[i])
        elif Esc == 1:
            _11stressE2.append(eda[i])

        labels[i] = 3
    else:
        labels[i] = -1
#print(length)
#plt.plot(eda)
#plt.plot(labels)
#plt.show()
plt.figure()
plt.title('Subject 11 Relaxed Segments')
plt.plot(_11relax1)
#plt.plot(_11relax2)
plt.plot(_11relax3)
#plt.plot(_11relax4)
plt.show()
plt.figure()
plt.title('Subject 11 Stress Segments')
plt.plot(_11stress1)
#plt.plot(stressE1)
plt.show()
Subject 12
In [14]:
data = 'Subject12AccTempEDA.csv'
sub_1 = pd.read_csv(data, delimiter = ",")
eda = sub_1[['EDA']]
label = sub_1[['Label']]
label = label.values
eda = eda.values
shape = label.shape
part = label[:,0]
length = part.size
labels = np.zeros((length))
_12stress1 = []
_12stressE1 = []
_12stressE2 = []
_12relax1= []
_12relax2= []
_12relax3= []
_12relax4= []
rc = 0
sc = 0
Esc = 0
# Get labels 
for i in range(length):
    if label[[i]] == 'Relax':
        if i>1 and label[[i-1]] != label[[i]]:
            rc = rc+1
           
        labels[i] = 0
        if rc == 0:
            _12relax1.append(eda[i])
            if i>5000:
                print('hi')
        elif rc == 1:
            _12relax2.append(eda[i])
        elif rc == 2:
            _12relax3.append(eda[i])
        else:
            _12relax4.append(eda[i])      
    elif label[[i]] == 'CognitiveStress':
        labels[i] = 1
        _12stress1.append( eda[i])
    elif label[[i]] == 'EmotionalStress':
        if i>100000 and label[[i-1]] != label[[i]]:
            Esc = Esc+1
        if Esc == 0:
            _12stressE1.append( eda[i])
        elif Esc == 1:
            _12stressE2.append(eda[i])

        labels[i] = 3
    else:
        labels[i] = -1
#print(length)
#plt.plot(eda)
#plt.plot(labels)
#plt.show()
plt.figure()
plt.title('Subject 12 Relaxed Segments')
plt.plot(_12relax1)
#plt.plot(_12relax2)
plt.plot(_12relax3)
#plt.plot(_12relax4)
plt.show()
plt.figure()
plt.title('Subject 12 Stress Segment')
plt.plot(_12stress1)
#plt.plot(stressE1)
plt.show()
Subject 13
In [15]:
data = 'Subject13AccTempEDA.csv'
sub_1 = pd.read_csv(data, delimiter = ",")
eda = sub_1[['EDA']]
label = sub_1[['Label']]
label = label.values
eda = eda.values
shape = label.shape
part = label[:,0]
length = part.size
labels = np.zeros((length))
_13stress1 = []
_13stressE1 = []
_13stressE2 = []
_13relax1= []
_13relax2= []
_13relax3= []
_13relax4= []
rc = 0
sc = 0
Esc = 0
# Get labels 
for i in range(length):
    if label[[i]] == 'Relax':
        if i>1 and label[[i-1]] != label[[i]]:
            rc = rc+1
      
        labels[i] = 0
        if rc == 0:
            _13relax1.append(eda[i])
            if i>5000:
                print('hi')
        elif rc == 1:
            _13relax2.append(eda[i])
        elif rc == 2:
            _13relax3.append(eda[i])
        else:
            _13relax4.append(eda[i])      
    elif label[[i]] == 'CognitiveStress':
        labels[i] = 1
        _13stress1.append( eda[i])
    elif label[[i]] == 'EmotionalStress':
        if i>100000 and label[[i-1]] != label[[i]]:
            Esc = Esc+1
        if Esc == 0:
            _13stressE1.append( eda[i])
        elif Esc == 1:
            _13stressE2.append(eda[i])

        labels[i] = 3
    else:
        labels[i] = -1

plt.figure()
plt.title('Subject 13 Relaxed Segments')
plt.plot(_13relax1)
#plt.plot(_13relax2)
plt.plot(_13relax3)
#plt.plot(_13relax4)
plt.show()
plt.figure()
plt.title('Subject 13 Stress Segment')
plt.plot(_13stress1)
#plt.plot(stressE1)
plt.show()
Subject 14
In [16]:
data = 'Subject14AccTempEDA.csv'
sub_1 = pd.read_csv(data, delimiter = ",")
eda = sub_1[['EDA']]
label = sub_1[['Label']]
label = label.values
eda = eda.values
shape = label.shape
part = label[:,0]
length = part.size
labels = np.zeros((length))
_14stress1 = []
_14stressE1 = []
_14stressE2 = []
_14relax1= []
_14relax2= []
_14relax3= []
_14relax4= []
rc = 0
sc = 0
Esc = 0
# Get labels 
for i in range(length):
    if label[[i]] == 'Relax':
        if i>1 and label[[i-1]] != label[[i]]:
            rc = rc+1
           
        labels[i] = 0
        if rc == 0:
            _14relax1.append(eda[i])
            if i>5000:
                print('hi')
        elif rc == 1:
            _14relax2.append(eda[i])
        elif rc == 2:
            _14relax3.append(eda[i])
        else:
            _14relax4.append(eda[i])      
    elif label[[i]] == 'CognitiveStress':
        labels[i] = 1
        _14stress1.append( eda[i])
    elif label[[i]] == 'EmotionalStress':
        if i>100000 and label[[i-1]] != label[[i]]:
            Esc = Esc+1
        if Esc == 0:
            _14stressE1.append( eda[i])
        elif Esc == 1:
            _14stressE2.append(eda[i])

        labels[i] = 3
    else:
        labels[i] = -1

plt.figure()
plt.title('Subject 14 Relaxed Segments')
plt.plot(_14relax1)
#plt.plot(_14relax2)
plt.plot(_14relax3)
#plt.plot(_14relax4)
plt.show()
plt.figure()
plt.title('Subject 14 Stress Segment')
plt.plot(_14stress1)
#plt.plot(stressE1)
plt.show()
Subject 15
In [17]:
data = 'Subject15AccTempEDA.csv'
sub_1 = pd.read_csv(data, delimiter = ",")
eda = sub_1[['EDA']]
label = sub_1[['Label']]
label = label.values
eda = eda.values
shape = label.shape
part = label[:,0]
length = part.size
labels = np.zeros((length))
_15stress1 = []
_15stressE1 = []
_15stressE2 = []
_15relax1= []
_15relax2= []
_15relax3= []
_15relax4= []
rc = 0
sc = 0
Esc = 0
# Get labels 
for i in range(length):
    if label[[i]] == 'Relax':
        if i>1 and label[[i-1]] != label[[i]]:
            rc = rc+1
           
        labels[i] = 0
        if rc == 0:
            _15relax1.append(eda[i])
            if i>5000:
                print('hi')
        elif rc == 1:
            _15relax2.append(eda[i])
        elif rc == 2:
            _15relax3.append(eda[i])
        else:
            _15relax4.append(eda[i])      
    elif label[[i]] == 'CognitiveStress':
        labels[i] = 1
        _15stress1.append( eda[i])
    elif label[[i]] == 'EmotionalStress':
        if i>100000 and label[[i-1]] != label[[i]]:
            Esc = Esc+1
        if Esc == 0:
            _15stressE1.append( eda[i])
        elif Esc == 1:
            _15stressE2.append(eda[i])

        labels[i] = 3
    else:
        labels[i] = -1
plt.figure()
plt.title('Subject 15 Relaxed Segments')
plt.plot(_15relax1)
#plt.plot(_15relax2)
plt.plot(_15relax3)
#plt.plot(_15relax4)
plt.show()
plt.figure()
plt.title('Subject 15 Stress Segment')
plt.plot(_15stress1)
#plt.plot(stressE1)
plt.show()
Subject 16
In [18]:
data = 'Subject16AccTempEDA.csv'
sub_1 = pd.read_csv(data, delimiter = ",")
eda = sub_1[['EDA']]
label = sub_1[['Label']]
label = label.values
eda = eda.values
shape = label.shape
part = label[:,0]
length = part.size
labels = np.zeros((length))
_16stress1 = []
_16stressE1 = []
_16stressE2 = []
_16relax1= []
_16relax2= []
_16relax3= []
_16relax4= []
rc = 0
sc = 0
Esc = 0
# Get labels 
for i in range(length):
    if label[[i]] == 'Relax':
        if i>1 and label[[i-1]] != label[[i]]:
            rc = rc+1
   
        labels[i] = 0
        if rc == 0:
            _16relax1.append(eda[i])
            if i>5000:
                print('hi')
        elif rc == 1:
            _16relax2.append(eda[i])
        elif rc == 2:
            _16relax3.append(eda[i])
        else:
            _16relax4.append(eda[i])      
    elif label[[i]] == 'CognitiveStress':
        labels[i] = 1
        _16stress1.append( eda[i])
    elif label[[i]] == 'EmotionalStress':
        if i>100000 and label[[i-1]] != label[[i]]:
            Esc = Esc+1
        if Esc == 0:
            _16stressE1.append( eda[i])
        elif Esc == 1:
            _16stressE2.append(eda[i])

        labels[i] = 3
    else:
        labels[i] = -1

plt.figure()
plt.title('Subject 16 Relaxed Segments')
plt.plot(_16relax1)
#plt.plot(_16relax2)
plt.plot(_16relax3)
#plt.plot(_16relax4)
plt.show()
plt.figure()
plt.title('Subject 16 Stress Segments')
plt.plot(_16stress1)
#plt.plot(stressE1)
plt.show()
SUbject 17
In [19]:
data = 'Subject17AccTempEDA.csv'
sub_1 = pd.read_csv(data, delimiter = ",")
eda = sub_1[['EDA']]
label = sub_1[['Label']]
label = label.values
eda = eda.values
shape = label.shape
part = label[:,0]
length = part.size
labels = np.zeros((length))
_17stress1 = []
_17stressE1 = []
_17stressE2 = []
_17relax1= []
_17relax2= []
_17relax3= []
_17relax4= []
rc = 0
sc = 0
Esc = 0
# Get labels 
for i in range(length):
    if label[[i]] == 'Relax':
        if i>1 and label[[i-1]] != label[[i]]:
            rc = rc+1
            
        labels[i] = 0
        if rc == 0:
            _17relax1.append(eda[i])
            if i>5000:
                print('hi')
        elif rc == 1:
            _17relax2.append(eda[i])
        elif rc == 2:
            _17relax3.append(eda[i])
        else:
            _17relax4.append(eda[i])      
    elif label[[i]] == 'CognitiveStress':
        labels[i] = 1
        _17stress1.append( eda[i])
    elif label[[i]] == 'EmotionalStress':
        if i>100000 and label[[i-1]] != label[[i]]:
            Esc = Esc+1
        if Esc == 0:
            _17stressE1.append( eda[i])
        elif Esc == 1:
            _17stressE2.append(eda[i])

        labels[i] = 3
    else:
        labels[i] = -1

plt.figure()
plt.title('Subject 17 Relaxed Segments')


plt.plot(_17relax1)
#plt.plot(_17relax2)
plt.plot(_17relax3)
#plt.plot(_17relax4)
plt.show()
plt.figure()
plt.title('Subject 17 Stressed Segments')
plt.plot(_17stress1)
#plt.plot(stressE1)
plt.show()
Subject 18
In [20]:
data = 'Subject18AccTempEDA.csv'
sub_1 = pd.read_csv(data, delimiter = ",")
eda = sub_1[['EDA']]
label = sub_1[['Label']]
label = label.values
eda = eda.values
shape = label.shape
part = label[:,0]
length = part.size
labels = np.zeros((length))
_18stress1 = []
_18stressE1 = []
_18stressE2 = []
_18relax1= []
_18relax2= []
_18relax3= []
_18relax4= []
rc = 0
sc = 0
Esc = 0

for i in range(length):
    if label[[i]] == 'Relax':
        if i>1 and label[[i-1]] != label[[i]]:
            rc = rc+1
            
        labels[i] = 0
        if rc == 0:
            _18relax1.append(eda[i])
            if i>5000:
                print('hi')
        elif rc == 1:
            _18relax2.append(eda[i])
        elif rc == 2:
            _18relax3.append(eda[i])
        else:
            _18relax4.append(eda[i])      
    elif label[[i]] == 'CognitiveStress':
        labels[i] = 1
        _18stress1.append( eda[i])
    elif label[[i]] == 'EmotionalStress':
        if i>100000 and label[[i-1]] != label[[i]]:
            Esc = Esc+1
        if Esc == 0:
            _18stressE1.append( eda[i])
        elif Esc == 1:
            _18stressE2.append(eda[i])

        labels[i] = 3
    else:
        labels[i] = -1

plt.figure()
plt.title('Subject 18 Relaxed Segments')
plt.plot(_18relax1)
#plt.plot(_18relax2)
plt.plot(_18relax3)
#plt.plot(_18relax4)
plt.show()
plt.figure()
plt.title('Subject 18 Stressed Segment')
plt.plot(_18stress1)
#plt.plot(stressE1)
plt.show()
Subject 19
In [21]:
data = 'Subject19AccTempEDA.csv'
sub_1 = pd.read_csv(data, delimiter = ",")
eda = sub_1[['EDA']]
label = sub_1[['Label']]
label = label.values
eda = eda.values
shape = label.shape
part = label[:,0]
length = part.size
labels = np.zeros((length))
_19stress1 = []
_19stressE1 = []
_19stressE2 = []
_19relax1= []
_19relax2= []
_19relax3= []
_19relax4= []
rc = 0
sc = 0
Esc = 0
# Get labels 
for i in range(length):
    if label[[i]] == 'Relax':
        if i>1 and label[[i-1]] != label[[i]]:
            rc = rc+1
          
        labels[i] = 0
        if rc == 0:
            _19relax1.append(eda[i])
            if i>5000:
                print('hi')
        elif rc == 1:
            _19relax2.append(eda[i])
        elif rc == 2:
            _19relax3.append(eda[i])
        else:
            _19relax4.append(eda[i])      
    elif label[[i]] == 'CognitiveStress':
        labels[i] = 1
        _19stress1.append( eda[i])
    elif label[[i]] == 'EmotionalStress':
        if i>100000 and label[[i-1]] != label[[i]]:
            Esc = Esc+1
        if Esc == 0:
            _19stressE1.append( eda[i])
        elif Esc == 1:
            _19stressE2.append(eda[i])

        labels[i] = 3
    else:
        labels[i] = -1

plt.figure()
plt.title('Subject 19 Relaxed Segments')


plt.plot(_19relax1)
#plt.plot(_19relax2)
plt.plot(_19relax3)
#plt.plot(_19relax4)
plt.show()
plt.figure()
plt.title('Subject 19 Stressed Segment')
plt.plot(_19stress1)
#plt.plot(stressE1)
plt.show()
Subject 20
In [22]:
data = 'Subject20AccTempEDA.csv'
sub_1 = pd.read_csv(data, delimiter = ",")
eda = sub_1[['EDA']]
label = sub_1[['Label']]
label = label.values
eda = eda.values
shape = label.shape
part = label[:,0]
length = part.size
labels = np.zeros((length))
_20stress1 = []
_20stressE1 = []
_20stressE2 = []
_20relax1= []
_20relax2= []
_20relax3= []
_20relax4= []
rc = 0
sc = 0
Esc = 0
# Get labels 
for i in range(length):
    if label[[i]] == 'Relax':
        if i>1 and label[[i-1]] != label[[i]]:
            rc = rc+1
            
        labels[i] = 0
        if rc == 0:
            _20relax1.append(eda[i])
            if i>5000:
                print('hi')
        elif rc == 1:
            _20relax2.append(eda[i])
        elif rc == 2:
            _20relax3.append(eda[i])
        else:
            _20relax4.append(eda[i])      
    elif label[[i]] == 'CognitiveStress':
        labels[i] = 1
        _20stress1.append( eda[i])
    elif label[[i]] == 'EmotionalStress':
        if i>100000 and label[[i-1]] != label[[i]]:
            Esc = Esc+1
        if Esc == 0:
            _20stressE1.append( eda[i])
        elif Esc == 1:
            _20stressE2.append(eda[i])

        labels[i] = 3
    else:
        labels[i] = -1

plt.figure()
plt.title('Subject 20 Relaxed Segments')
plt.plot(_20relax1)
#plt.plot(_20relax2)
plt.plot(_20relax3)
#plt.plot(_20relax4)
plt.show()
plt.figure()
plt.title('Subject 20 Stressed Segment')
plt.plot(_20stress1)
#plt.plot(stressE1)
plt.show()

Data Processing

Sampling Frequency is 8Hz. Each 'Relaxed' portion was a little under 5 minutes for each subject. Each "Stressed" portion was a little over 5 minutes for each subject. Because of this, we took 2000 samples, sampled at 8Hz, from each "relaxed" portion and then 2500 samples, sampled at 8Hz, for each "stressed" sample.

Below is our step by step methodology for processing the data.

Step 1: Chunk Data

In [23]:
_1stress = np.vstack(_1stress1)
_2stress = np.vstack(_2stress1)
_3stress = np.vstack(_3stress1)
_4stress = np.vstack(_4stress1)
_5stress = np.vstack(_5stress1)
_6stress = np.vstack(_6stress1)
_7stress = np.vstack(_7stress1)
_8stress = np.vstack(_8stress1)
_9stress = np.vstack(_9stress1)
_10stress = np.vstack(_10stress1)
_11stress = np.vstack(_11stress1)
_12stress = np.vstack(_12stress1)
_13stress = np.vstack(_13stress1)
_14stress = np.vstack(_14stress1)
_15stress = np.vstack(_15stress1)
_16stress = np.vstack(_16stress1)
_17stress = np.vstack(_17stress1)
_18stress = np.vstack(_18stress1)
_19stress = np.vstack(_19stress1)
_20stress = np.vstack(_20stress1)
Dimension of Stress Matrix: 2500 x 20

Each subject had stressed data around 2800 samples. We chose to make them all of length 2500 samples (about 5 minutes)

In [24]:
s1 = _1stress[-2500:,:]
s2 = _2stress[-2500:,:]
s3 = _3stress[-2500:,:]
s4 = _4stress[-2500:,:]
s5 = _5stress[-2500:,:]
s6 = _6stress[-2500:,:]
s7 = _7stress[-2500:,:]
s8 = _8stress[-2500:,:]
s9 = _9stress[-2500:,:]
s10 = _10stress[-2500:,:]
s11 = _11stress[-2500:,:]
s12 = _12stress[-2500:,:]
s13 = _13stress[-2500:,:]
s14 = _14stress[-2500:,:]
s15 = _15stress[-2500:,:]
s16 = _16stress[-2500:,:]
s17 = _17stress[-2500:,:]
s18 = _18stress[-2500:,:]
s19 = _19stress[-2500:,:]
s20 = _20stress[-2500:,:]

s = np.concatenate([s1,s2,s3,s4,s5,s6,s7,s8,s9,s10,s11,s12,s13,s14,s15,s16,s17,s18,s19,s20], axis = 1)
plt.figure(figsize= (20,5))
plt.title('20 Subjects Stress')
plt.xlabel('Samples at Fs=8Hz')
plt.ylabel('uS')
plt.plot(s)
plt.show()

Subjects 3, 6, 11, 18, 19 had very poor EDA data for the stress period. The data here was of very small magnitudes. Our guess is that this was an error in how the sensor was worn.

Since we know the type of signals we are looking for to detect a fluctuation of the autonomous nervous system, we thought for our purposes it would be beneficial to remove these 5 subjects data.

In [25]:
st = np.concatenate([s1,s2,s4,s5,s7,s8,s9,s10,s12,s13,s14,s15,s16,s17,s20], axis = 1)
plt.figure(figsize= (20,5))
plt.title('15 Subjects Stress')
plt.xlabel('Samples at Fs=8Hz')
plt.ylabel('uS')
plt.plot(st)
plt.show()
Dimensions of Relaxed Data

For each subject, there were 2 different 5 minute intervals. One was the first 5 minutes, and the other was 5 minutes after the "stressed" portion.

"relax1" is the first 5 minutes

"relax3" is the 5 minutes after the period of stress

For each subject, I segmented them all into length 2000 samples

In [26]:
_1relax1 = np.vstack(_1relax1)
_2relax1 = np.vstack(_2relax1)
_3relax1 = np.vstack(_3relax1)
_4relax1 = np.vstack(_4relax1)
_5relax1 = np.vstack(_5relax1)
_6relax1 = np.vstack(_6relax1)
_7relax1 = np.vstack(_7relax1)
_8relax1 = np.vstack(_8relax1)
_9relax1 = np.vstack(_9relax1)
_10relax1 = np.vstack(_10relax1)
_11relax1 = np.vstack(_11relax1)
_12relax1 = np.vstack(_12relax1)
_13relax1 = np.vstack(_13relax1)
_14relax1 = np.vstack(_14relax1)
_15relax1 = np.vstack(_15relax1)
_16relax1 = np.vstack(_16relax1)
_17relax1 = np.vstack(_17relax1)
_18relax1 = np.vstack(_18relax1)
_19relax1 = np.vstack(_19relax1)
_20relax1 = np.vstack(_20relax1)
In [27]:
r1_1 = _1relax1[-2000:,:]
r1_2 = _2relax1[-2000:,:]
r1_3 = _3relax1[-2000:,:]
r1_4 = _4relax1[-2000:,:]
r1_5 = _5relax1[-2000:,:]
r1_6 = _6relax1[-2000:,:]
r1_7 = _7relax1[-2000:,:]
r1_8 = _8relax1[-2000:,:]
r1_9 = _9relax1[-2000:,:]
r1_10 = _10relax1[-2000:,:]
r1_11 = _11relax1[-2000:,:]
r1_12 = _12relax1[-2000:,:]
r1_13 = _13relax1[-2000:,:]
r1_14 = _14relax1[-2000:,:]
r1_15 = _15relax1[-2000:,:]
r1_16 = _16relax1[-2000:,:]
r1_17 = _17relax1[-2000:,:]
r1_18 = _18relax1[-2000:,:]
r1_19 = _19relax1[-2000:,:]
r1_20 = _20relax1[-2000:,:]

r1 = np.concatenate([r1_1,r1_2,r1_3,r1_4,r1_5,r1_6,r1_7,r1_8,r1_9,r1_10,r1_11,r1_12,r1_13,r1_14,r1_15,r1_16,r1_17,r1_18,r1_19,r1_20], axis = 1)
plt.figure(figsize= (20,5))
plt.title('20 Subjects Relax1')
plt.xlabel('Samples at Fs=8Hz')
plt.ylabel('uS')
plt.plot(r1)
plt.show()
In [28]:
_1relax3 = np.vstack(_1relax3)
_2relax3 = np.vstack(_2relax3)
_3relax3 = np.vstack(_3relax3)
_4relax3 = np.vstack(_4relax3)
_5relax3 = np.vstack(_5relax3)
_6relax3 = np.vstack(_6relax3)
_7relax3 = np.vstack(_7relax3)
_8relax3 = np.vstack(_8relax3)
_9relax3 = np.vstack(_9relax3)
_10relax3 = np.vstack(_10relax3)
_11relax3 = np.vstack(_11relax3)
_12relax3 = np.vstack(_12relax3)
_13relax3 = np.vstack(_13relax3)
_14relax3 = np.vstack(_14relax3)
_15relax3 = np.vstack(_15relax3)
_16relax3 = np.vstack(_16relax3)
_17relax3 = np.vstack(_17relax3)
_18relax3 = np.vstack(_18relax3)
_19relax3 = np.vstack(_19relax3)
_20relax3 = np.vstack(_20relax3)
In [29]:
r3_1 = _1relax3[-2000:,:]
r3_2 = _2relax3[-2000:,:]
r3_3 = _3relax3[-2000:,:]
r3_4 = _4relax3[-2000:,:]
r3_5 = _5relax3[-2000:,:]
r3_6 = _6relax3[-2000:,:]
r3_7 = _7relax3[-2000:,:]
r3_8 = _8relax3[-2000:,:]
r3_9 = _9relax3[-2000:,:]
r3_10 = _10relax3[-2000:,:]
r3_11 = _11relax3[-2000:,:]
r3_12 = _12relax3[-2000:,:]
r3_13 = _13relax3[-2000:,:]
r3_14 = _14relax3[-2000:,:]
r3_15 = _15relax3[-2000:,:]
r3_16 = _16relax3[-2000:,:]
r3_17 = _17relax3[-2000:,:]
r3_18 = _18relax3[-2000:,:]
r3_19 = _19relax3[-2000:,:]
r3_20 = _20relax3[-2000:,:]

r3 = np.concatenate([r3_1,r3_2,r3_3,r3_4,r3_5,r3_6,r3_7,r3_8,r3_9,r3_10,r3_11,r3_12,r3_13,r3_14,r3_15,r3_16,r3_17,r3_18,r3_19,r3_20], axis = 1)
plt.figure(figsize= (20,5))
plt.title('20 Subjects Relax3')
plt.xlabel('Samples at Fs=8Hz')
plt.ylabel('uS')
plt.plot(r3)
plt.show()

Set A: Segmented Data of Length 100 samples (~12 seconds) with Means removed

The first way we processed the data was by chunking each segmented into length 100 samples, which corresponds to roughly 12 seconds.

Below shows a few examples of some segments from each category
In [30]:
r1_flat = np.reshape(r1.T, 40000)

r1_data1 = np.reshape(r1_flat,(400,100))

r3_flat = np.reshape(r3.T, 40000)

r3_data1 = np.reshape(r3_flat,(400,100))

plt.figure(figsize= (20,5))
plt.title('Segmented Relaxed Examples')
plt.plot(r3_data1[0,:])
plt.plot(r3_data1[1,:])
plt.plot(r3_data1[2,:])
plt.plot(r3_data1[3,:])
plt.plot(r3_data1[4,:])
plt.plot(r3_data1[105,:])
plt.plot(r3_data1[300,:])
plt.plot(r3_data1[107,:])
plt.plot(r3_data1[108,:])
plt.plot(r3_data1[109,:])
plt.show()
In [31]:
s_flat = np.reshape(st.T, 37500)

s_data1 = np.reshape(s_flat,(375,100))

plt.figure(figsize= (20,5))
plt.title('Segmented Stress Examples')
plt.plot(s_data1[7,:])
plt.plot(s_data1[0,:])
plt.plot(s_data1[2,:])
plt.plot(s_data1[8,:])

plt.show()

Remove Means

Our next step in processing the data was to remove the means from each segment.

In [32]:
mean_s = np.mean(s_data1, axis=1)
mean_r1 = np.mean(r1_data1, axis=1)
mean_r3 = np.mean(r3_data1, axis=1)
In [33]:
mean_s_ = np.array([mean_s])
mean_r1_ = np.array([mean_r1])
mean_r3_ = np.array([mean_r3])

S_nm = s_data1-mean_s_.T
R1_nm = r1_data1-mean_r1_.T
R3_nm = r3_data1-mean_r3_.T
Visualize all of the data with removed means
In [34]:
plt.figure(figsize= (20,5))
plt.title('Stressed Segments of length 100 With Removed Means')
plt.plot(S_nm.T)
plt.show()

plt.figure(figsize= (20,5))
plt.title('Relaxed Segments of length 100 With Removed Means')
plt.plot(R1_nm.T)
plt.show()

Split into Training, Testing, and Validation

The total data size was 1175 x 100. Our next step was to split this data into training, testing, and validation for classification. We decided that 70% should be training, 15% should be validation, and 15% should be testing. We did not shuffle the data before splitting it up. So the testing data, for example, contained data from predominently 2 subjects. Our reasoning behind this was that the purpose of our project is to detect stress using a model on individuals by recognizing the shape of the SCRS. So theoretically, our model should be able to predict the stress state on any individual. In other words, the testing and validation data should be from completely different subjects.

Put 70% in training, 15% in Validation, 15% in Testing
In [35]:
R_all_nm = np.vstack([R1_nm,R3_nm])

X_train2 = np.zeros((820,100))
Y_train2 = np.zeros((820,1))
X_val2 = np.zeros((178,100))
Y_val2 = np.zeros((178,1))
X_test2 = np.zeros((177,100))
Y_test2 = np.zeros((177,1))


X_train2 = np.vstack([S_nm[:270,:], R_all_nm[:550,:]])
for i in range(400):
    Y_train2[i,0] = 1
X_val2 = np.vstack([S_nm[270:323,:], R_all_nm[550:675,:]])
for i in range(50):
    Y_val2[i,0] = 1
X_test2 = np.vstack([S_nm[323:,:], R_all_nm[675:800,:]])
for i in range(50):
    Y_test2[i,0] = 1
Look at Dimensions for dataset of length 100
In [36]:
print('X Train Shape:')
print(X_train2.shape)
print('y Train Shape:')
print(Y_train2.shape)
print('X Val Shape:')
print(X_val2.shape)
print('y Val Shape:')
print(Y_val2.shape)
print('X Test Shape:')
print(X_test2.shape)
print('y Test Shape:')
print(Y_test2.shape)
X Train Shape:
(820, 100)
y Train Shape:
(820, 1)
X Val Shape:
(178, 100)
y Val Shape:
(178, 1)
X Test Shape:
(177, 100)
y Test Shape:
(177, 1)

Set B: Segmented into longer portions each of length 500 Samples (~1 minute) with Means subtracted

Due to the fact that our filter size is roughly 40, we thought that it might be worthwhile to increase the length of the data by segmenting every minute instead of 10 seconds. This way, the shape of the SCRs might be picked up better by the filter. The downside of this is that there is 5 times less data than the previous set. However, this is a tradeoff that we thought was worthwhile to explore.

In [37]:
r1_flat = np.reshape(r1.T, 40000)

r1_500 = np.reshape(r1_flat,(80,500))

r3_flat = np.reshape(r3.T, 40000)

r3_500 = np.reshape(r3_flat,(80,500))

s_flat = np.reshape(st.T, 37500)

s_500_g = np.reshape(s_flat,(75,500))
In [38]:
mean_s = np.mean(s_500_g, axis=1)

mean_r1 = np.mean(r1_500, axis=1)

mean_r3 = np.mean(r3_500, axis=1)

mean_s_ = np.array([mean_s])
mean_r1_ = np.array([mean_r1])
mean_r3_ = np.array([mean_r3])

S_nm_500_g = s_500_g-mean_s_.T
R1_nm_500 = r1_500-mean_r1_.T
R3_nm_500 = r3_500-mean_r3_.T
Visualize Segments of length 500 with removed means
In [39]:
plt.figure(figsize= (20,5))
plt.title('Stressed Segments')
plt.plot(S_nm_500_g.T)
plt.show()

plt.figure(figsize= (20,5))
plt.title('Relaxed Segments')
plt.plot(R1_nm_500.T)
plt.show()
Portion 70% training, 15% validation, 15% testing
In [40]:
R_all_nm_500 = np.vstack([R1_nm_500,R3_nm_500])

X_train2_500 = np.zeros((160,100))
Y_train2_500 = np.zeros((160,1))
X_val2_500 = np.zeros((38,100))
Y_val2_500 = np.zeros((38,1))
X_test2_500 = np.zeros((37,100))
Y_test2_500 = np.zeros((37,1))

X_train2_500 = np.vstack([S_nm_500_g[:50,:], R_all_nm_500[:110,:]])
for i in range(80):
    Y_train2_500[i,0] = 1
X_val2_500 = np.vstack([S_nm_500_g[50:63,:], R_all_nm_500[110:135,:]])
for i in range(10):
    Y_val2_500[i,0] = 1
X_test2_500 = np.vstack([S_nm_500_g[63:,:], R_all_nm_500[135:160,:]])
for i in range(10):
    Y_test2_500[i,0] = 1
Look at Dimensions for dataset of length 500
In [41]:
print('X Train Shape:')
print(X_train2_500.shape)
print('y Train Shape:')
print(Y_train2_500.shape)
print('X Val Shape:')
print(X_val2_500.shape)
print('y Val Shape:')
print(Y_val2_500.shape)
print('X Test Shape:')
print(X_test2_500.shape)
print('y Test Shape:')
print(Y_test2_500.shape)
X Train Shape:
(160, 500)
y Train Shape:
(160, 1)
X Val Shape:
(38, 500)
y Val Shape:
(38, 1)
X Test Shape:
(37, 500)
y Test Shape:
(37, 1)

Set C: Normalize Data Set of Length 500 with means subtracted

Aside from subtracting the mean and altering the length of the data, we also tried normalizing each segment, to account for any discrepancies due to magnitude differences from person to person. Below is a visual of some of the normalized stressed data. As seen, the SCRs are pretty consistent from sample to sample in shape.

However, the problem with normalizing segments to accound for magnitude differences in SCRs from person to person is that the "relaxed" samples would also have to be normalized (in order to be able to make predictions on a completely new unknown dataset) However, in doing so, any noise from the relaxed segments gets amplified. When we tried running classifiers on it, we had poor results for every type of classification method.

The other thing we thought about doing was normalizing each subjects data (relaxed and stressed together)before segmenting. However, this would prohibit real-time predictions. To allow for multiple applications, we thought it would be best to not normalize the data and it would be interesting to see if we could still classify data the varied in magnitude from person to person by only removing the means.

In [42]:
normed_X_train = normalize(X_train2_500, axis=1, norm='l1')
normed_X_val = normalize(X_val2_500, axis=1, norm='l1')
normed_X_test = normalize(X_test2_500, axis=1, norm='l1')
Visualize Some of Normed Data
In [43]:
plt.figure(figsize= (20,5))
plt.title('Segmented Normalized  Examples')
plt.plot(normed_X_train[0,:])
plt.plot(normed_X_train[1,:])
plt.plot(normed_X_train[2,:])
plt.plot(normed_X_train[3,:])
plt.plot(normed_X_train[40,:])

plt.show()
In [44]:
print('X Train Shape:')
print(normed_X_train.shape)
print('y Train Shape:')
print(Y_train2_500.shape)
print('X Val Shape:')
print(normed_X_val.shape)
print('y Val Shape:')
print(Y_val2_500.shape)
print('X Test Shape:')
print(normed_X_test.shape)
print('y Test Shape:')
print(Y_test2_500.shape)
X Train Shape:
(160, 500)
y Train Shape:
(160, 1)
X Val Shape:
(38, 500)
y Val Shape:
(38, 1)
X Test Shape:
(37, 500)
y Test Shape:
(37, 1)

Apply Binary Classification Techniques

Our intuition suggested that a convolutional neural network may be a good approach. This is due to the time-invariant nature of the SCRs within a sample. It is also due to the fact that SCRs vary from person to person in magnitude, though the shape is pretty consistent. The idea is that a CNN would be able to pick up on these patterns, given enough training data. We know that CNNs are useful in image recognition for picking up on textures and patterns given images where the thing to be classified is positioned in different ways in the photo. (We follow-up on these themes in the data augmentation section). Along the same lines, we thought a CNN would be able to pick up SCRs that differ in positioning and magnitude from person to person.

To set a baseline, we wanted to explore simpler classification techniques on our dataset as a means to later compare our results using a CNN. The two baseline classifiers we looked at were logistic regression and decision trees.

Since we are doing binary classification, we chose to look at logistic regression as this is generally always useful in predicting whether or not data belongs to a class. Because our relaxed data and stressed data did differ greatly, due to the fact that stressed data normally has more variance, we hypothesized that the logistic regression model would likely be able to do okay in classifying our data to a certain extent.

We also wanted to use decision trees as a baseline because it is another well-known binary classifier. Due to the time invariant nature of SCRs, we did not think that a decision tree or logistic regression model would be able to classify our data as well as a CNN. We also did not have a strong intuition into what our decision boundary would look like in the input's 500-dim feature space but thought that a neural network would be necessary to transform our input into a parameter subspace and then to draw a decision boundary between the two classes in that space.

Logistic Regression

Set A - Data segmented in length 100 (12.5 second segments)

In [45]:
logisticA = LogisticRegression()
logisticA.fit(X_train2, Y_train2)
/usr/local/lib/python3.6/site-packages/sklearn/utils/validation.py:578: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
  y = column_or_1d(y, warn=True)
Out[45]:
LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,
          intercept_scaling=1, max_iter=100, multi_class='ovr', n_jobs=1,
          penalty='l2', random_state=None, solver='liblinear', tol=0.0001,
          verbose=0, warm_start=False)
In [46]:
# Use score method to get accuracy of model
score = logisticA.score(X_test2, Y_test2)
print('Score on Test Set:')
print(score)
pred = logisticA.predict(X_test2)
cm = metrics.confusion_matrix(Y_test2, pred)
plt.figure(figsize=(5,5))
sns.heatmap(cm, annot=True, fmt=".3f", linewidths=.5, square = True, cmap = 'Blues_r');
plt.ylabel('Actual label');
plt.xlabel('Predicted label');
all_sample_title = 'Accuracy Score Log Regression Test data (Set B): {0}'.format(score)
plt.title(all_sample_title, size = 10);

score = logisticA.score(X_val2, Y_val2)
print(' ')
print('Score on Validation Set:')
print(score)
pred = logisticA.predict(X_val2)
cm = metrics.confusion_matrix(Y_val2, pred)
plt.figure(figsize=(5,5))
sns.heatmap(cm, annot=True, fmt=".3f", linewidths=.5, square = True, cmap = 'Blues_r');
plt.ylabel('Actual label');
plt.xlabel('Predicted label');
all_sample_title = 'Accuracy Score Log Regression Val data (Set B): {0}'.format(score)
plt.title(all_sample_title, size = 10);
Score on Test Set:
0.847457627118644
 
Score on Validation Set:
0.8426966292134831

Set B - Data segmented in length 500 (~1 Minute segments)

In [47]:
logisticB = LogisticRegression()
logisticB.fit(X_train2_500, Y_train2_500)
/usr/local/lib/python3.6/site-packages/sklearn/utils/validation.py:578: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
  y = column_or_1d(y, warn=True)
Out[47]:
LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,
          intercept_scaling=1, max_iter=100, multi_class='ovr', n_jobs=1,
          penalty='l2', random_state=None, solver='liblinear', tol=0.0001,
          verbose=0, warm_start=False)
In [48]:
# Use score method to get accuracy of model
score = logisticB.score(X_test2_500, Y_test2_500)
print('Score on Test Set:')
print(score)
predictions = logisticB.predict(X_test2_500)
cm = metrics.confusion_matrix(Y_test2_500, predictions)
plt.figure(figsize=(5,5))
sns.heatmap(cm, annot=True, fmt=".3f", linewidths=.5, square = True, cmap = 'Blues_r');
plt.ylabel('Actual label');
plt.xlabel('Predicted label');
all_sample_title = 'Accuracy Score Log Regression Test data (Set B): {0}'.format(score)
plt.title(all_sample_title, size = 10);

score = logisticB.score(X_val2_500, Y_val2_500)
print(' ')
print('Score on Validation Set:')
print(score)
predictions = logisticB.predict(X_val2_500)
cm = metrics.confusion_matrix(Y_val2_500, predictions)
plt.figure(figsize=(5,5))
sns.heatmap(cm, annot=True, fmt=".3f", linewidths=.5, square = True, cmap = 'Blues_r');
plt.ylabel('Actual label');
plt.xlabel('Predicted label');
all_sample_title = 'Accuracy Score Log Regression Val data (Set B): {0}'.format(score)
plt.title(all_sample_title, size = 10);
Score on Test Set:
0.7837837837837838
 
Score on Validation Set:
0.7105263157894737

Remarks on Logistic Regression

When testing the logistic regression model, we tested it on both the test data and validation data. The model performed best for data segmented at length 100 samples relative to the data segmented at 500 samples. Intuitely this makes sense because the longer samples have much more variation from sample to sample and so it makes sense that the logistic regression model may not be able to pick up on this.

We also looked at the confusion matrices, to visualize whether it was mislabeling only one category. As seen above, especially for Set A, the model was mainly misclassifying "stressed" data as "relaxed". This makes sense, as some of the stressed segments could have been cut off in places where SCRs were only partially captured, or missed off completely.

Logistic Regression Test Set Accuracy:

Length 100 - 84.7%

Length 500 - 78%

Decision Tree Classifier

Set A

In [49]:
dta = DecisionTreeClassifier(criterion = "gini", random_state = 100,
                               max_depth=30, min_samples_leaf=3)
dta.fit(X_train2, Y_train2)
y_preda = dta.predict(X_test2)
print("Accuracy on Test data is: ")
accu2= accuracy_score(Y_test2,y_preda)*100
print(accu2)

cm = metrics.confusion_matrix(Y_test2, y_preda)
plt.figure(figsize=(5,5))
sns.heatmap(cm, annot=True, fmt=".3f", linewidths=.5, square = True, cmap = 'Blues_r');
plt.ylabel('Actual label');
plt.xlabel('Predicted label');
all_sample_title = 'Accuracy Score Decision Tree Test data (Set A): {0}'.format(accu2)
plt.title(all_sample_title, size = 10);

dta = DecisionTreeClassifier(criterion = "gini", random_state = 100,
                               max_depth=30, min_samples_leaf=3)
dta.fit(X_train2, Y_train2)
y_preda = dta.predict(X_val2)
print("Accuracy on Validation data is: ")
accu2= accuracy_score(Y_val2,y_preda)*100
print(accu2)
cm = metrics.confusion_matrix(Y_val2, y_preda)
plt.figure(figsize=(5,5))
sns.heatmap(cm, annot=True, fmt=".3f", linewidths=.5, square = True, cmap = 'Blues_r');
plt.ylabel('Actual label');
plt.xlabel('Predicted label');
all_sample_title = 'Accuracy Score Decision Tree Val data (Set A): {0}'.format(accu2)
plt.title(all_sample_title, size = 10);
Accuracy on Test data is: 
71.1864406779661
Accuracy on Validation data is: 
58.98876404494382

Set B

In [50]:
dtb = DecisionTreeClassifier(criterion = "gini", random_state = 100,
                               max_depth=30, min_samples_leaf=3)
dtb.fit(X_train2_500, Y_train2_500)
y_pred2 = dtb.predict(X_test2_500)
print("Accuracy on Test data is: ")
accu2= accuracy_score(Y_test2_500,y_pred2)*100
print(accu2)
cm = metrics.confusion_matrix(Y_test2_500, y_pred2)
plt.figure(figsize=(5,5))
sns.heatmap(cm, annot=True, fmt=".3f", linewidths=.5, square = True, cmap = 'Blues_r');
plt.ylabel('Actual label');
plt.xlabel('Predicted label');
all_sample_title = 'Accuracy Score Decision Tree Test data (Set B): {0}'.format(accu2)
plt.title(all_sample_title, size = 10);
dtb = DecisionTreeClassifier(criterion = "gini", random_state = 100,
                               max_depth=30, min_samples_leaf=3)
dtb.fit(X_train2_500, Y_train2_500)
y_pred2 = dtb.predict(X_val2_500)
print("Accuracy on Validation data is: ")
accu2= accuracy_score(Y_val2_500,y_pred2)*100
print(accu2)
cm = metrics.confusion_matrix(Y_val2_500, y_pred2)
plt.figure(figsize=(5,5))
sns.heatmap(cm, annot=True, fmt=".3f", linewidths=.5, square = True, cmap = 'Blues_r');
plt.ylabel('Actual label');
plt.xlabel('Predicted label');
all_sample_title = 'Accuracy Score Decision Tree Val data (Set B): {0}'.format(accu2)
plt.title(all_sample_title, size = 10);
Accuracy on Test data is: 
78.37837837837837
Accuracy on Validation data is: 
55.26315789473685

Remarks on Decision Tree

When testing the decision tree model, we tested it on both the test data and validation data. Unlike the logistic regression model, this model performed best for data segmented at length 500 samples relative to the data segmented at 100 samples. On the whole, this performed worse than the logistic regression model.

We also looked at the confusion matrices, to visualize whether it was mislabeling only one category. Unlike the logistic regression model, the model was mainly misclassifying "relaxed" data as "stressed". The classifier has as low as 55% accuracy on the validation data set. Because this was so low, we confirmed our hypothesis that the decision tree model is not well suited given the characteristics of the EDA data.

From here we moved on to use a Convolutional Neural Network.

Decision Tree Test Set Accuracy:

Length 100 - 71.2% Length 500 - 78.4%

. . . . . . . . . . . . . . . . Neural Network . . . . . . . . . . . . .

Convolutional Neural Network

The third binary classification model we used was a Convolutional Neural Network. As mentioned previously, we thought this would work the best due to the time invariant nature of SCRs that appear in stressed data.

We tested the various versions of the model and tweaked parameters in order to both minimize loss and maximize accuracy. Below is a description of some of the things we altered, and our reasoning for altering them.

The primary decision we made was the length of the filter kernel, and how many filters to use.

Below is a graph of what our stressed examples look like. We looked at this to choose the size of our kernal in the CNN. Here it is clear that a filter of about 40 samples would be able to caputure most of the SCR. For some CNNs, we also played around with filer kernel sizes of 10 samples and with multiple filters. We thought these might be able to pick up features of the SCR shape.

In [51]:
plt.figure(figsize= (15,5))
plt.title('Segmented Stress Example')
plt.plot(X_train2[100,:])
Out[51]:
[<matplotlib.lines.Line2D at 0x10be37710>]

We also spent a lot of time determining how many layers to do. Our intuition was that we would not need many layers because our data seemed to visually be something that a 1D CNN layer could easily pick up on. So we started with 1 layer and then added more. We found that adding more layers typically did not improve accuracy.

Other decisions involved altering the loss and activation functions. Since we are doing binary classification, we initially thought that we should use the keras loss function 'binary crossentropy', however we experienced better results using 'mse'. Along the same lines we found that activations on the 1D CNN layers worked best with 'relu' and tanh', though we played around with others. 'Relu' was a baseline activation that we used as a means of introducing non-linearities into our model. We found that it typically worked best on each 1D CNN layer. The activation for the lst dense layer that we used was typically sigmoid, which outputs between 0 and 1.

Understanding CNN filters

The main reason we used CNN was because we thought the filters would be able to pick up the shape of the SCR. This is why, for each CNN model below, we display the filters of the first CNN layer. This will be discussed more in the conclusion.

Below are different models we tested with labels at the top. We tested various CNN models on both data sets (the one with short length-100 samples and the one with long length-500 samples).

For each data set we compared, the best performing model is at the top. We tried many more models than are listed, but are displaying the ones that worked the best.

Note that we hardcoded our best accuracies in headings for each model but also output the accuracies which fluctuate everytime notebook runs due to shuffling of data.

Reshaping Data for CNN
Set A

X_train1 Y_train2 X_test1 Y_test2 X_val1 Y_val2

In [52]:
X_train1 = X_train2.reshape(820,100,1)
X_val1 = X_val2.reshape(178,100,1)
X_test1 = X_test2.reshape(177,100,1)
Set B

X_train_500 X_val_500 X_test_500 Y_train2_500 Y_val2_500 Y_test2_500

In [53]:
X_train_500 = X_train2_500.reshape(160,500,1)
X_val_500 = X_val2_500.reshape(38,500,1)
X_test_500 = X_test2_500.reshape(37,500,1)

Best Model Results for Set A

Our best result was a simple model using filter of size 40, in 1 CNN layer with one dense layer . The test accuracy was 87.5%.

In [54]:
test4 = Sequential()
test4.add(Conv1D(4, (40),
                 activation='relu',
                 input_shape=(100,1)))

test4.add(Flatten())

test4.add(Dense(1, activation = 'sigmoid'))

print(test4.summary())

test4.compile(loss='mean_squared_error', optimizer='Adam',metrics=['accuracy'])
history = test4.fit(X_train1,Y_train2, epochs=100, batch_size=100, validation_data=(X_val1,Y_val2))
WARNING:tensorflow:From /usr/local/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py:497: calling conv1d (from tensorflow.python.ops.nn_ops) with data_format=NHWC is deprecated and will be removed in a future version.
Instructions for updating:
`NHWC` for data_format is deprecated, use `NWC` instead
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv1d_1 (Conv1D)            (None, 61, 4)             164       
_________________________________________________________________
flatten_1 (Flatten)          (None, 244)               0         
_________________________________________________________________
dense_1 (Dense)              (None, 1)                 245       
=================================================================
Total params: 409
Trainable params: 409
Non-trainable params: 0
_________________________________________________________________
None
Train on 820 samples, validate on 178 samples
Epoch 1/100
820/820 [==============================] - 1s 705us/step - loss: 0.2496 - acc: 0.5927 - val_loss: 0.2482 - val_acc: 0.7247
Epoch 2/100
820/820 [==============================] - 0s 67us/step - loss: 0.2472 - acc: 0.6463 - val_loss: 0.2468 - val_acc: 0.6180
Epoch 3/100
820/820 [==============================] - 0s 74us/step - loss: 0.2445 - acc: 0.6610 - val_loss: 0.2452 - val_acc: 0.4831
Epoch 4/100
820/820 [==============================] - 0s 67us/step - loss: 0.2411 - acc: 0.6488 - val_loss: 0.2430 - val_acc: 0.4719
Epoch 5/100
820/820 [==============================] - 0s 67us/step - loss: 0.2375 - acc: 0.6427 - val_loss: 0.2402 - val_acc: 0.4831
Epoch 6/100
820/820 [==============================] - 0s 67us/step - loss: 0.2337 - acc: 0.6402 - val_loss: 0.2382 - val_acc: 0.4663
Epoch 7/100
820/820 [==============================] - 0s 67us/step - loss: 0.2296 - acc: 0.6463 - val_loss: 0.2363 - val_acc: 0.4719
Epoch 8/100
820/820 [==============================] - 0s 71us/step - loss: 0.2262 - acc: 0.6439 - val_loss: 0.2359 - val_acc: 0.4831
Epoch 9/100
820/820 [==============================] - 0s 68us/step - loss: 0.2233 - acc: 0.6476 - val_loss: 0.2331 - val_acc: 0.5225
Epoch 10/100
820/820 [==============================] - 0s 69us/step - loss: 0.2206 - acc: 0.6537 - val_loss: 0.2322 - val_acc: 0.5281
Epoch 11/100
820/820 [==============================] - 0s 68us/step - loss: 0.2185 - acc: 0.6549 - val_loss: 0.2317 - val_acc: 0.5337
Epoch 12/100
820/820 [==============================] - 0s 67us/step - loss: 0.2168 - acc: 0.6585 - val_loss: 0.2296 - val_acc: 0.5506
Epoch 13/100
820/820 [==============================] - 0s 70us/step - loss: 0.2151 - acc: 0.6622 - val_loss: 0.2275 - val_acc: 0.5899
Epoch 14/100
820/820 [==============================] - 0s 71us/step - loss: 0.2138 - acc: 0.6744 - val_loss: 0.2240 - val_acc: 0.6067
Epoch 15/100
820/820 [==============================] - 0s 75us/step - loss: 0.2124 - acc: 0.6744 - val_loss: 0.2231 - val_acc: 0.6067
Epoch 16/100
820/820 [==============================] - 0s 75us/step - loss: 0.2112 - acc: 0.6695 - val_loss: 0.2224 - val_acc: 0.6067
Epoch 17/100
820/820 [==============================] - 0s 70us/step - loss: 0.2099 - acc: 0.6854 - val_loss: 0.2195 - val_acc: 0.6180
Epoch 18/100
820/820 [==============================] - 0s 76us/step - loss: 0.2088 - acc: 0.6915 - val_loss: 0.2201 - val_acc: 0.6180
Epoch 19/100
820/820 [==============================] - 0s 68us/step - loss: 0.2078 - acc: 0.6878 - val_loss: 0.2176 - val_acc: 0.6404
Epoch 20/100
820/820 [==============================] - 0s 67us/step - loss: 0.2067 - acc: 0.6976 - val_loss: 0.2158 - val_acc: 0.6629
Epoch 21/100
820/820 [==============================] - 0s 68us/step - loss: 0.2058 - acc: 0.6963 - val_loss: 0.2138 - val_acc: 0.6742
Epoch 22/100
820/820 [==============================] - 0s 69us/step - loss: 0.2049 - acc: 0.7000 - val_loss: 0.2125 - val_acc: 0.6798
Epoch 23/100
820/820 [==============================] - 0s 68us/step - loss: 0.2040 - acc: 0.7000 - val_loss: 0.2117 - val_acc: 0.6854
Epoch 24/100
820/820 [==============================] - 0s 64us/step - loss: 0.2030 - acc: 0.7085 - val_loss: 0.2074 - val_acc: 0.7360
Epoch 25/100
820/820 [==============================] - 0s 66us/step - loss: 0.2023 - acc: 0.7195 - val_loss: 0.2041 - val_acc: 0.8090
Epoch 26/100
820/820 [==============================] - 0s 68us/step - loss: 0.2015 - acc: 0.7220 - val_loss: 0.2053 - val_acc: 0.7809
Epoch 27/100
820/820 [==============================] - 0s 67us/step - loss: 0.2006 - acc: 0.7207 - val_loss: 0.2034 - val_acc: 0.7921
Epoch 28/100
820/820 [==============================] - 0s 66us/step - loss: 0.1999 - acc: 0.7195 - val_loss: 0.2024 - val_acc: 0.8034
Epoch 29/100
820/820 [==============================] - 0s 66us/step - loss: 0.1991 - acc: 0.7280 - val_loss: 0.2007 - val_acc: 0.8090
Epoch 30/100
820/820 [==============================] - 0s 67us/step - loss: 0.1984 - acc: 0.7305 - val_loss: 0.1992 - val_acc: 0.8258
Epoch 31/100
820/820 [==============================] - 0s 68us/step - loss: 0.1977 - acc: 0.7293 - val_loss: 0.1987 - val_acc: 0.8202
Epoch 32/100
820/820 [==============================] - 0s 70us/step - loss: 0.1971 - acc: 0.7293 - val_loss: 0.1984 - val_acc: 0.8202
Epoch 33/100
820/820 [==============================] - 0s 75us/step - loss: 0.1964 - acc: 0.7366 - val_loss: 0.1948 - val_acc: 0.8483
Epoch 34/100
820/820 [==============================] - 0s 69us/step - loss: 0.1958 - acc: 0.7415 - val_loss: 0.1913 - val_acc: 0.8708
Epoch 35/100
820/820 [==============================] - 0s 66us/step - loss: 0.1953 - acc: 0.7415 - val_loss: 0.1900 - val_acc: 0.8820
Epoch 36/100
820/820 [==============================] - 0s 72us/step - loss: 0.1946 - acc: 0.7427 - val_loss: 0.1895 - val_acc: 0.8820
Epoch 37/100
820/820 [==============================] - 0s 68us/step - loss: 0.1940 - acc: 0.7415 - val_loss: 0.1907 - val_acc: 0.8596
Epoch 38/100
820/820 [==============================] - 0s 67us/step - loss: 0.1934 - acc: 0.7402 - val_loss: 0.1908 - val_acc: 0.8539
Epoch 39/100
820/820 [==============================] - 0s 68us/step - loss: 0.1929 - acc: 0.7378 - val_loss: 0.1887 - val_acc: 0.8652
Epoch 40/100
820/820 [==============================] - 0s 88us/step - loss: 0.1924 - acc: 0.7451 - val_loss: 0.1884 - val_acc: 0.8596
Epoch 41/100
820/820 [==============================] - 0s 71us/step - loss: 0.1919 - acc: 0.7451 - val_loss: 0.1888 - val_acc: 0.8596
Epoch 42/100
820/820 [==============================] - 0s 67us/step - loss: 0.1915 - acc: 0.7390 - val_loss: 0.1886 - val_acc: 0.8539
Epoch 43/100
820/820 [==============================] - 0s 70us/step - loss: 0.1911 - acc: 0.7500 - val_loss: 0.1844 - val_acc: 0.8708
Epoch 44/100
820/820 [==============================] - 0s 67us/step - loss: 0.1905 - acc: 0.7500 - val_loss: 0.1833 - val_acc: 0.8820
Epoch 45/100
820/820 [==============================] - 0s 68us/step - loss: 0.1900 - acc: 0.7500 - val_loss: 0.1837 - val_acc: 0.8764
Epoch 46/100
820/820 [==============================] - 0s 67us/step - loss: 0.1896 - acc: 0.7439 - val_loss: 0.1837 - val_acc: 0.8708
Epoch 47/100
820/820 [==============================] - 0s 68us/step - loss: 0.1892 - acc: 0.7439 - val_loss: 0.1831 - val_acc: 0.8764
Epoch 48/100
820/820 [==============================] - 0s 69us/step - loss: 0.1887 - acc: 0.7476 - val_loss: 0.1809 - val_acc: 0.8820
Epoch 49/100
820/820 [==============================] - 0s 75us/step - loss: 0.1883 - acc: 0.7476 - val_loss: 0.1800 - val_acc: 0.8764
Epoch 50/100
820/820 [==============================] - 0s 68us/step - loss: 0.1879 - acc: 0.7512 - val_loss: 0.1774 - val_acc: 0.8876
Epoch 51/100
820/820 [==============================] - 0s 67us/step - loss: 0.1875 - acc: 0.7500 - val_loss: 0.1766 - val_acc: 0.8876
Epoch 52/100
820/820 [==============================] - 0s 70us/step - loss: 0.1871 - acc: 0.7512 - val_loss: 0.1757 - val_acc: 0.8876
Epoch 53/100
820/820 [==============================] - 0s 71us/step - loss: 0.1868 - acc: 0.7500 - val_loss: 0.1747 - val_acc: 0.8876
Epoch 54/100
820/820 [==============================] - 0s 66us/step - loss: 0.1864 - acc: 0.7500 - val_loss: 0.1743 - val_acc: 0.8876
Epoch 55/100
820/820 [==============================] - 0s 67us/step - loss: 0.1861 - acc: 0.7500 - val_loss: 0.1757 - val_acc: 0.8820
Epoch 56/100
820/820 [==============================] - 0s 70us/step - loss: 0.1857 - acc: 0.7512 - val_loss: 0.1759 - val_acc: 0.8820
Epoch 57/100
820/820 [==============================] - 0s 68us/step - loss: 0.1853 - acc: 0.7512 - val_loss: 0.1743 - val_acc: 0.8876
Epoch 58/100
820/820 [==============================] - 0s 65us/step - loss: 0.1850 - acc: 0.7512 - val_loss: 0.1705 - val_acc: 0.8933
Epoch 59/100
820/820 [==============================] - 0s 67us/step - loss: 0.1847 - acc: 0.7512 - val_loss: 0.1711 - val_acc: 0.8933
Epoch 60/100
820/820 [==============================] - 0s 70us/step - loss: 0.1844 - acc: 0.7500 - val_loss: 0.1696 - val_acc: 0.8933
Epoch 61/100
820/820 [==============================] - 0s 69us/step - loss: 0.1841 - acc: 0.7500 - val_loss: 0.1706 - val_acc: 0.8820
Epoch 62/100
820/820 [==============================] - 0s 67us/step - loss: 0.1837 - acc: 0.7512 - val_loss: 0.1707 - val_acc: 0.8820
Epoch 63/100
820/820 [==============================] - 0s 65us/step - loss: 0.1834 - acc: 0.7537 - val_loss: 0.1713 - val_acc: 0.8876
Epoch 64/100
820/820 [==============================] - 0s 71us/step - loss: 0.1831 - acc: 0.7549 - val_loss: 0.1703 - val_acc: 0.8820
Epoch 65/100
820/820 [==============================] - 0s 69us/step - loss: 0.1828 - acc: 0.7549 - val_loss: 0.1695 - val_acc: 0.8820
Epoch 66/100
820/820 [==============================] - 0s 76us/step - loss: 0.1826 - acc: 0.7561 - val_loss: 0.1712 - val_acc: 0.8820
Epoch 67/100
820/820 [==============================] - 0s 68us/step - loss: 0.1822 - acc: 0.7561 - val_loss: 0.1687 - val_acc: 0.8820
Epoch 68/100
820/820 [==============================] - 0s 68us/step - loss: 0.1820 - acc: 0.7524 - val_loss: 0.1683 - val_acc: 0.8820
Epoch 69/100
820/820 [==============================] - 0s 73us/step - loss: 0.1817 - acc: 0.7537 - val_loss: 0.1683 - val_acc: 0.8820
Epoch 70/100
820/820 [==============================] - 0s 71us/step - loss: 0.1814 - acc: 0.7549 - val_loss: 0.1687 - val_acc: 0.8764
Epoch 71/100
820/820 [==============================] - 0s 65us/step - loss: 0.1813 - acc: 0.7561 - val_loss: 0.1704 - val_acc: 0.8820
Epoch 72/100
820/820 [==============================] - 0s 67us/step - loss: 0.1810 - acc: 0.7561 - val_loss: 0.1688 - val_acc: 0.8820
Epoch 73/100
820/820 [==============================] - 0s 66us/step - loss: 0.1807 - acc: 0.7573 - val_loss: 0.1678 - val_acc: 0.8820
Epoch 74/100
820/820 [==============================] - 0s 68us/step - loss: 0.1804 - acc: 0.7573 - val_loss: 0.1668 - val_acc: 0.8820
Epoch 75/100
820/820 [==============================] - 0s 67us/step - loss: 0.1802 - acc: 0.7549 - val_loss: 0.1648 - val_acc: 0.8876
Epoch 76/100
820/820 [==============================] - 0s 65us/step - loss: 0.1800 - acc: 0.7500 - val_loss: 0.1627 - val_acc: 0.8820
Epoch 77/100
820/820 [==============================] - 0s 66us/step - loss: 0.1798 - acc: 0.7488 - val_loss: 0.1613 - val_acc: 0.8820
Epoch 78/100
820/820 [==============================] - 0s 67us/step - loss: 0.1797 - acc: 0.7512 - val_loss: 0.1636 - val_acc: 0.8876
Epoch 79/100
820/820 [==============================] - 0s 68us/step - loss: 0.1794 - acc: 0.7512 - val_loss: 0.1631 - val_acc: 0.8876
Epoch 80/100
820/820 [==============================] - 0s 67us/step - loss: 0.1792 - acc: 0.7512 - val_loss: 0.1631 - val_acc: 0.8876
Epoch 81/100
820/820 [==============================] - 0s 67us/step - loss: 0.1790 - acc: 0.7537 - val_loss: 0.1642 - val_acc: 0.8820
Epoch 82/100
820/820 [==============================] - 0s 66us/step - loss: 0.1788 - acc: 0.7585 - val_loss: 0.1636 - val_acc: 0.8820
Epoch 83/100
820/820 [==============================] - 0s 74us/step - loss: 0.1786 - acc: 0.7549 - val_loss: 0.1619 - val_acc: 0.8876
Epoch 84/100
820/820 [==============================] - 0s 68us/step - loss: 0.1784 - acc: 0.7524 - val_loss: 0.1615 - val_acc: 0.8876
Epoch 85/100
820/820 [==============================] - 0s 67us/step - loss: 0.1782 - acc: 0.7537 - val_loss: 0.1629 - val_acc: 0.8820
Epoch 86/100
820/820 [==============================] - 0s 66us/step - loss: 0.1780 - acc: 0.7549 - val_loss: 0.1616 - val_acc: 0.8820
Epoch 87/100
820/820 [==============================] - 0s 69us/step - loss: 0.1778 - acc: 0.7561 - val_loss: 0.1608 - val_acc: 0.8820
Epoch 88/100
820/820 [==============================] - 0s 70us/step - loss: 0.1776 - acc: 0.7512 - val_loss: 0.1588 - val_acc: 0.8820
Epoch 89/100
820/820 [==============================] - 0s 69us/step - loss: 0.1775 - acc: 0.7512 - val_loss: 0.1590 - val_acc: 0.8820
Epoch 90/100
820/820 [==============================] - 0s 68us/step - loss: 0.1773 - acc: 0.7512 - val_loss: 0.1594 - val_acc: 0.8764
Epoch 91/100
820/820 [==============================] - 0s 68us/step - loss: 0.1771 - acc: 0.7573 - val_loss: 0.1608 - val_acc: 0.8820
Epoch 92/100
820/820 [==============================] - 0s 69us/step - loss: 0.1769 - acc: 0.7585 - val_loss: 0.1635 - val_acc: 0.8820
Epoch 93/100
820/820 [==============================] - 0s 67us/step - loss: 0.1768 - acc: 0.7622 - val_loss: 0.1627 - val_acc: 0.8820
Epoch 94/100
820/820 [==============================] - 0s 68us/step - loss: 0.1766 - acc: 0.7610 - val_loss: 0.1616 - val_acc: 0.8820
Epoch 95/100
820/820 [==============================] - 0s 68us/step - loss: 0.1764 - acc: 0.7585 - val_loss: 0.1591 - val_acc: 0.8820
Epoch 96/100
820/820 [==============================] - 0s 67us/step - loss: 0.1763 - acc: 0.7561 - val_loss: 0.1563 - val_acc: 0.8820
Epoch 97/100
820/820 [==============================] - 0s 66us/step - loss: 0.1762 - acc: 0.7549 - val_loss: 0.1575 - val_acc: 0.8876
Epoch 98/100
820/820 [==============================] - 0s 69us/step - loss: 0.1760 - acc: 0.7549 - val_loss: 0.1562 - val_acc: 0.8820
Epoch 99/100
820/820 [==============================] - 0s 67us/step - loss: 0.1759 - acc: 0.7549 - val_loss: 0.1562 - val_acc: 0.8820
Epoch 100/100
820/820 [==============================] - 0s 67us/step - loss: 0.1758 - acc: 0.7573 - val_loss: 0.1580 - val_acc: 0.8820
In [55]:
acc = history.history['acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
acc = history.history['acc']
val_acc = history.history['val_acc']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Acc')
plt.legend()
plt.show()

results = test4.evaluate(X_test1, Y_test2)
print(results)
top_layer = test4.layers[0]
plt.title('Visualize First Layer Filter 1')
plt.plot(top_layer.get_weights()[0][:, :, 0].squeeze())
plt.show()
plt.title('Visualize First Layer Filter 2')
plt.plot(top_layer.get_weights()[0][:, :, 1].squeeze())
plt.show()
plt.title('Visualize First Layer Filter 3')
plt.plot(top_layer.get_weights()[0][:, :, 2].squeeze())
plt.show()
plt.title('Visualize First Layer Filter 4')
plt.plot(top_layer.get_weights()[0][:, :, 3].squeeze())
plt.show()
177/177 [==============================] - 0s 71us/step
[0.17723463947153362, 0.8644067799977664]

We tried increasing the number of filters to 6 Filters each of length 40 in the first layer and had a test acc of 84%

In [56]:
test11 = Sequential()
test11.add(Conv1D(6, (40),
                 activation='relu',
                 input_shape=(100,1)))

test11.add(Flatten())

test11.add(Dense(1, activation = 'sigmoid'))

print(test11.summary())

test11.compile(loss='mean_squared_error', optimizer='Adam',metrics=['accuracy'])
history = test11.fit(X_train1,Y_train2, epochs=300, batch_size=100, validation_data=(X_val1,Y_val2))
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv1d_2 (Conv1D)            (None, 61, 6)             246       
_________________________________________________________________
flatten_2 (Flatten)          (None, 366)               0         
_________________________________________________________________
dense_2 (Dense)              (None, 1)                 367       
=================================================================
Total params: 613
Trainable params: 613
Non-trainable params: 0
_________________________________________________________________
None
Train on 820 samples, validate on 178 samples
Epoch 1/300
820/820 [==============================] - 0s 419us/step - loss: 0.2492 - acc: 0.6061 - val_loss: 0.2480 - val_acc: 0.6517
Epoch 2/300
820/820 [==============================] - 0s 70us/step - loss: 0.2470 - acc: 0.6524 - val_loss: 0.2463 - val_acc: 0.4045
Epoch 3/300
820/820 [==============================] - 0s 67us/step - loss: 0.2445 - acc: 0.5207 - val_loss: 0.2436 - val_acc: 0.3989
Epoch 4/300
820/820 [==============================] - 0s 69us/step - loss: 0.2411 - acc: 0.6537 - val_loss: 0.2397 - val_acc: 0.4663
Epoch 5/300
820/820 [==============================] - 0s 71us/step - loss: 0.2371 - acc: 0.6415 - val_loss: 0.2360 - val_acc: 0.4888
Epoch 6/300
820/820 [==============================] - 0s 66us/step - loss: 0.2329 - acc: 0.6500 - val_loss: 0.2336 - val_acc: 0.4831
Epoch 7/300
820/820 [==============================] - 0s 67us/step - loss: 0.2288 - acc: 0.6366 - val_loss: 0.2314 - val_acc: 0.4944
Epoch 8/300
820/820 [==============================] - 0s 69us/step - loss: 0.2252 - acc: 0.6329 - val_loss: 0.2310 - val_acc: 0.4944
Epoch 9/300
820/820 [==============================] - 0s 69us/step - loss: 0.2221 - acc: 0.6415 - val_loss: 0.2291 - val_acc: 0.4944
Epoch 10/300
820/820 [==============================] - 0s 69us/step - loss: 0.2195 - acc: 0.6451 - val_loss: 0.2275 - val_acc: 0.5169
Epoch 11/300
820/820 [==============================] - 0s 67us/step - loss: 0.2174 - acc: 0.6512 - val_loss: 0.2274 - val_acc: 0.5225
Epoch 12/300
820/820 [==============================] - 0s 70us/step - loss: 0.2156 - acc: 0.6561 - val_loss: 0.2253 - val_acc: 0.5449
Epoch 13/300
820/820 [==============================] - 0s 71us/step - loss: 0.2140 - acc: 0.6646 - val_loss: 0.2221 - val_acc: 0.5899
Epoch 14/300
820/820 [==============================] - 0s 102us/step - loss: 0.2125 - acc: 0.6817 - val_loss: 0.2210 - val_acc: 0.5955
Epoch 15/300
820/820 [==============================] - 0s 88us/step - loss: 0.2111 - acc: 0.6817 - val_loss: 0.2199 - val_acc: 0.6124
Epoch 16/300
820/820 [==============================] - 0s 68us/step - loss: 0.2099 - acc: 0.6829 - val_loss: 0.2182 - val_acc: 0.6124
Epoch 17/300
820/820 [==============================] - 0s 84us/step - loss: 0.2086 - acc: 0.6829 - val_loss: 0.2170 - val_acc: 0.6292
Epoch 18/300
820/820 [==============================] - 0s 70us/step - loss: 0.2075 - acc: 0.6866 - val_loss: 0.2164 - val_acc: 0.6348
Epoch 19/300
820/820 [==============================] - 0s 69us/step - loss: 0.2064 - acc: 0.7098 - val_loss: 0.2097 - val_acc: 0.7135
Epoch 20/300
820/820 [==============================] - 0s 67us/step - loss: 0.2054 - acc: 0.7207 - val_loss: 0.2084 - val_acc: 0.7191
Epoch 21/300
820/820 [==============================] - 0s 74us/step - loss: 0.2044 - acc: 0.7146 - val_loss: 0.2088 - val_acc: 0.7191
Epoch 22/300
820/820 [==============================] - 0s 67us/step - loss: 0.2034 - acc: 0.7085 - val_loss: 0.2106 - val_acc: 0.6742
Epoch 23/300
820/820 [==============================] - 0s 73us/step - loss: 0.2027 - acc: 0.7073 - val_loss: 0.2110 - val_acc: 0.6685
Epoch 24/300
820/820 [==============================] - 0s 68us/step - loss: 0.2018 - acc: 0.7146 - val_loss: 0.2057 - val_acc: 0.7303
Epoch 25/300
820/820 [==============================] - 0s 71us/step - loss: 0.2009 - acc: 0.7280 - val_loss: 0.2025 - val_acc: 0.7640
Epoch 26/300
820/820 [==============================] - 0s 70us/step - loss: 0.2002 - acc: 0.7195 - val_loss: 0.2024 - val_acc: 0.7640
Epoch 27/300
820/820 [==============================] - 0s 68us/step - loss: 0.1995 - acc: 0.7183 - val_loss: 0.1990 - val_acc: 0.8146
Epoch 28/300
820/820 [==============================] - 0s 68us/step - loss: 0.1987 - acc: 0.7183 - val_loss: 0.2004 - val_acc: 0.7697
Epoch 29/300
820/820 [==============================] - 0s 68us/step - loss: 0.1980 - acc: 0.7268 - val_loss: 0.1995 - val_acc: 0.7753
Epoch 30/300
820/820 [==============================] - 0s 81us/step - loss: 0.1973 - acc: 0.7195 - val_loss: 0.1979 - val_acc: 0.8090
Epoch 31/300
820/820 [==============================] - 0s 86us/step - loss: 0.1966 - acc: 0.7183 - val_loss: 0.1950 - val_acc: 0.8315
Epoch 32/300
820/820 [==============================] - 0s 71us/step - loss: 0.1960 - acc: 0.7329 - val_loss: 0.1913 - val_acc: 0.8596
Epoch 33/300
820/820 [==============================] - 0s 71us/step - loss: 0.1953 - acc: 0.7366 - val_loss: 0.1896 - val_acc: 0.8596
Epoch 34/300
820/820 [==============================] - 0s 71us/step - loss: 0.1946 - acc: 0.7378 - val_loss: 0.1887 - val_acc: 0.8596
Epoch 35/300
820/820 [==============================] - 0s 67us/step - loss: 0.1940 - acc: 0.7366 - val_loss: 0.1882 - val_acc: 0.8596
Epoch 36/300
820/820 [==============================] - 0s 68us/step - loss: 0.1935 - acc: 0.7390 - val_loss: 0.1892 - val_acc: 0.8539
Epoch 37/300
820/820 [==============================] - 0s 68us/step - loss: 0.1929 - acc: 0.7390 - val_loss: 0.1869 - val_acc: 0.8596
Epoch 38/300
820/820 [==============================] - 0s 72us/step - loss: 0.1924 - acc: 0.7415 - val_loss: 0.1831 - val_acc: 0.8820
Epoch 39/300
820/820 [==============================] - 0s 66us/step - loss: 0.1917 - acc: 0.7451 - val_loss: 0.1816 - val_acc: 0.8820
Epoch 40/300
820/820 [==============================] - 0s 70us/step - loss: 0.1911 - acc: 0.7451 - val_loss: 0.1804 - val_acc: 0.8876
Epoch 41/300
820/820 [==============================] - 0s 66us/step - loss: 0.1905 - acc: 0.7451 - val_loss: 0.1792 - val_acc: 0.8876
Epoch 42/300
820/820 [==============================] - 0s 83us/step - loss: 0.1900 - acc: 0.7439 - val_loss: 0.1761 - val_acc: 0.8933
Epoch 43/300
820/820 [==============================] - 0s 67us/step - loss: 0.1896 - acc: 0.7451 - val_loss: 0.1746 - val_acc: 0.8989
Epoch 44/300
820/820 [==============================] - 0s 68us/step - loss: 0.1890 - acc: 0.7463 - val_loss: 0.1751 - val_acc: 0.8989
Epoch 45/300
820/820 [==============================] - 0s 68us/step - loss: 0.1886 - acc: 0.7427 - val_loss: 0.1756 - val_acc: 0.8933
Epoch 46/300
820/820 [==============================] - 0s 71us/step - loss: 0.1881 - acc: 0.7439 - val_loss: 0.1736 - val_acc: 0.8989
Epoch 47/300
820/820 [==============================] - 0s 73us/step - loss: 0.1876 - acc: 0.7451 - val_loss: 0.1750 - val_acc: 0.8933
Epoch 48/300
820/820 [==============================] - 0s 93us/step - loss: 0.1872 - acc: 0.7451 - val_loss: 0.1739 - val_acc: 0.8933
Epoch 49/300
820/820 [==============================] - 0s 72us/step - loss: 0.1868 - acc: 0.7463 - val_loss: 0.1691 - val_acc: 0.9157
Epoch 50/300
820/820 [==============================] - 0s 77us/step - loss: 0.1862 - acc: 0.7488 - val_loss: 0.1703 - val_acc: 0.9045
Epoch 51/300
820/820 [==============================] - 0s 68us/step - loss: 0.1858 - acc: 0.7488 - val_loss: 0.1684 - val_acc: 0.9101
Epoch 52/300
820/820 [==============================] - 0s 69us/step - loss: 0.1854 - acc: 0.7488 - val_loss: 0.1670 - val_acc: 0.9157
Epoch 53/300
820/820 [==============================] - 0s 66us/step - loss: 0.1850 - acc: 0.7488 - val_loss: 0.1667 - val_acc: 0.9101
Epoch 54/300
820/820 [==============================] - 0s 68us/step - loss: 0.1845 - acc: 0.7488 - val_loss: 0.1654 - val_acc: 0.9101
Epoch 55/300
820/820 [==============================] - 0s 71us/step - loss: 0.1843 - acc: 0.7488 - val_loss: 0.1668 - val_acc: 0.9101
Epoch 56/300
820/820 [==============================] - 0s 67us/step - loss: 0.1838 - acc: 0.7500 - val_loss: 0.1669 - val_acc: 0.9045
Epoch 57/300
820/820 [==============================] - 0s 68us/step - loss: 0.1834 - acc: 0.7488 - val_loss: 0.1623 - val_acc: 0.9101
Epoch 58/300
820/820 [==============================] - 0s 68us/step - loss: 0.1831 - acc: 0.7488 - val_loss: 0.1587 - val_acc: 0.9213
Epoch 59/300
820/820 [==============================] - 0s 74us/step - loss: 0.1828 - acc: 0.7500 - val_loss: 0.1588 - val_acc: 0.9213
Epoch 60/300
820/820 [==============================] - 0s 68us/step - loss: 0.1824 - acc: 0.7488 - val_loss: 0.1578 - val_acc: 0.9213
Epoch 61/300
820/820 [==============================] - 0s 70us/step - loss: 0.1821 - acc: 0.7500 - val_loss: 0.1574 - val_acc: 0.9157
Epoch 62/300
820/820 [==============================] - 0s 65us/step - loss: 0.1818 - acc: 0.7524 - val_loss: 0.1581 - val_acc: 0.9157
Epoch 63/300
820/820 [==============================] - 0s 70us/step - loss: 0.1813 - acc: 0.7537 - val_loss: 0.1565 - val_acc: 0.9157
Epoch 64/300
820/820 [==============================] - 0s 70us/step - loss: 0.1811 - acc: 0.7512 - val_loss: 0.1546 - val_acc: 0.9157
Epoch 65/300
820/820 [==============================] - 0s 69us/step - loss: 0.1808 - acc: 0.7512 - val_loss: 0.1579 - val_acc: 0.9101
Epoch 66/300
820/820 [==============================] - 0s 68us/step - loss: 0.1805 - acc: 0.7512 - val_loss: 0.1537 - val_acc: 0.9101
Epoch 67/300
820/820 [==============================] - 0s 73us/step - loss: 0.1802 - acc: 0.7549 - val_loss: 0.1512 - val_acc: 0.9101
Epoch 68/300
820/820 [==============================] - 0s 76us/step - loss: 0.1800 - acc: 0.7524 - val_loss: 0.1520 - val_acc: 0.9101
Epoch 69/300
820/820 [==============================] - 0s 66us/step - loss: 0.1797 - acc: 0.7561 - val_loss: 0.1524 - val_acc: 0.9101
Epoch 70/300
820/820 [==============================] - 0s 69us/step - loss: 0.1794 - acc: 0.7537 - val_loss: 0.1527 - val_acc: 0.9101
Epoch 71/300
820/820 [==============================] - 0s 70us/step - loss: 0.1791 - acc: 0.7512 - val_loss: 0.1544 - val_acc: 0.9101
Epoch 72/300
820/820 [==============================] - 0s 71us/step - loss: 0.1789 - acc: 0.7512 - val_loss: 0.1539 - val_acc: 0.9101
Epoch 73/300
820/820 [==============================] - 0s 70us/step - loss: 0.1786 - acc: 0.7512 - val_loss: 0.1516 - val_acc: 0.9101
Epoch 74/300
820/820 [==============================] - 0s 69us/step - loss: 0.1784 - acc: 0.7500 - val_loss: 0.1509 - val_acc: 0.9101
Epoch 75/300
820/820 [==============================] - 0s 68us/step - loss: 0.1782 - acc: 0.7488 - val_loss: 0.1512 - val_acc: 0.9101
Epoch 76/300
820/820 [==============================] - 0s 73us/step - loss: 0.1779 - acc: 0.7512 - val_loss: 0.1501 - val_acc: 0.9101
Epoch 77/300
820/820 [==============================] - 0s 70us/step - loss: 0.1777 - acc: 0.7524 - val_loss: 0.1481 - val_acc: 0.9101
Epoch 78/300
820/820 [==============================] - 0s 68us/step - loss: 0.1776 - acc: 0.7537 - val_loss: 0.1442 - val_acc: 0.9157
Epoch 79/300
820/820 [==============================] - 0s 67us/step - loss: 0.1774 - acc: 0.7561 - val_loss: 0.1450 - val_acc: 0.9157
Epoch 80/300
820/820 [==============================] - 0s 72us/step - loss: 0.1771 - acc: 0.7549 - val_loss: 0.1459 - val_acc: 0.9101
Epoch 81/300
820/820 [==============================] - 0s 67us/step - loss: 0.1769 - acc: 0.7537 - val_loss: 0.1462 - val_acc: 0.9101
Epoch 82/300
820/820 [==============================] - 0s 69us/step - loss: 0.1767 - acc: 0.7537 - val_loss: 0.1468 - val_acc: 0.9101
Epoch 83/300
820/820 [==============================] - 0s 66us/step - loss: 0.1765 - acc: 0.7512 - val_loss: 0.1431 - val_acc: 0.9157
Epoch 84/300
820/820 [==============================] - 0s 69us/step - loss: 0.1763 - acc: 0.7524 - val_loss: 0.1439 - val_acc: 0.9101
Epoch 85/300
820/820 [==============================] - 0s 78us/step - loss: 0.1761 - acc: 0.7524 - val_loss: 0.1439 - val_acc: 0.9101
Epoch 86/300
820/820 [==============================] - 0s 70us/step - loss: 0.1760 - acc: 0.7537 - val_loss: 0.1449 - val_acc: 0.9101
Epoch 87/300
820/820 [==============================] - 0s 69us/step - loss: 0.1758 - acc: 0.7561 - val_loss: 0.1470 - val_acc: 0.9101
Epoch 88/300
820/820 [==============================] - 0s 68us/step - loss: 0.1757 - acc: 0.7561 - val_loss: 0.1460 - val_acc: 0.9101
Epoch 89/300
820/820 [==============================] - 0s 76us/step - loss: 0.1754 - acc: 0.7549 - val_loss: 0.1405 - val_acc: 0.9101
Epoch 90/300
820/820 [==============================] - 0s 69us/step - loss: 0.1753 - acc: 0.7549 - val_loss: 0.1401 - val_acc: 0.9157
Epoch 91/300
820/820 [==============================] - 0s 68us/step - loss: 0.1751 - acc: 0.7524 - val_loss: 0.1397 - val_acc: 0.9101
Epoch 92/300
820/820 [==============================] - 0s 70us/step - loss: 0.1749 - acc: 0.7537 - val_loss: 0.1409 - val_acc: 0.9101
Epoch 93/300
820/820 [==============================] - 0s 71us/step - loss: 0.1748 - acc: 0.7549 - val_loss: 0.1433 - val_acc: 0.9045
Epoch 94/300
820/820 [==============================] - 0s 68us/step - loss: 0.1746 - acc: 0.7549 - val_loss: 0.1426 - val_acc: 0.9045
Epoch 95/300
820/820 [==============================] - 0s 69us/step - loss: 0.1745 - acc: 0.7537 - val_loss: 0.1404 - val_acc: 0.9157
Epoch 96/300
820/820 [==============================] - 0s 70us/step - loss: 0.1744 - acc: 0.7537 - val_loss: 0.1400 - val_acc: 0.9101
Epoch 97/300
820/820 [==============================] - 0s 68us/step - loss: 0.1743 - acc: 0.7537 - val_loss: 0.1408 - val_acc: 0.9101
Epoch 98/300
820/820 [==============================] - 0s 71us/step - loss: 0.1741 - acc: 0.7561 - val_loss: 0.1437 - val_acc: 0.9101
Epoch 99/300
820/820 [==============================] - 0s 71us/step - loss: 0.1740 - acc: 0.7512 - val_loss: 0.1427 - val_acc: 0.9157
Epoch 100/300
820/820 [==============================] - 0s 68us/step - loss: 0.1739 - acc: 0.7512 - val_loss: 0.1427 - val_acc: 0.9157
Epoch 101/300
820/820 [==============================] - 0s 71us/step - loss: 0.1737 - acc: 0.7500 - val_loss: 0.1408 - val_acc: 0.9101
Epoch 102/300
820/820 [==============================] - 0s 72us/step - loss: 0.1736 - acc: 0.7524 - val_loss: 0.1364 - val_acc: 0.9157
Epoch 103/300
820/820 [==============================] - 0s 66us/step - loss: 0.1735 - acc: 0.7537 - val_loss: 0.1360 - val_acc: 0.9101
Epoch 104/300
820/820 [==============================] - 0s 69us/step - loss: 0.1734 - acc: 0.7537 - val_loss: 0.1334 - val_acc: 0.9157
Epoch 105/300
820/820 [==============================] - 0s 65us/step - loss: 0.1733 - acc: 0.7524 - val_loss: 0.1347 - val_acc: 0.9101
Epoch 106/300
820/820 [==============================] - 0s 70us/step - loss: 0.1732 - acc: 0.7561 - val_loss: 0.1363 - val_acc: 0.9157
Epoch 107/300
820/820 [==============================] - 0s 69us/step - loss: 0.1729 - acc: 0.7573 - val_loss: 0.1367 - val_acc: 0.9101
Epoch 108/300
820/820 [==============================] - 0s 69us/step - loss: 0.1728 - acc: 0.7549 - val_loss: 0.1383 - val_acc: 0.9045
Epoch 109/300
820/820 [==============================] - 0s 68us/step - loss: 0.1728 - acc: 0.7573 - val_loss: 0.1420 - val_acc: 0.9101
Epoch 110/300
820/820 [==============================] - 0s 71us/step - loss: 0.1727 - acc: 0.7549 - val_loss: 0.1407 - val_acc: 0.9101
Epoch 111/300
820/820 [==============================] - 0s 69us/step - loss: 0.1725 - acc: 0.7561 - val_loss: 0.1377 - val_acc: 0.9045
Epoch 112/300
820/820 [==============================] - 0s 68us/step - loss: 0.1724 - acc: 0.7549 - val_loss: 0.1376 - val_acc: 0.9045
Epoch 113/300
820/820 [==============================] - 0s 70us/step - loss: 0.1724 - acc: 0.7549 - val_loss: 0.1379 - val_acc: 0.9045
Epoch 114/300
820/820 [==============================] - 0s 74us/step - loss: 0.1722 - acc: 0.7537 - val_loss: 0.1383 - val_acc: 0.9045
Epoch 115/300
820/820 [==============================] - 0s 81us/step - loss: 0.1722 - acc: 0.7537 - val_loss: 0.1381 - val_acc: 0.9045
Epoch 116/300
820/820 [==============================] - 0s 70us/step - loss: 0.1721 - acc: 0.7561 - val_loss: 0.1337 - val_acc: 0.9101
Epoch 117/300
820/820 [==============================] - 0s 69us/step - loss: 0.1720 - acc: 0.7573 - val_loss: 0.1338 - val_acc: 0.9101
Epoch 118/300
820/820 [==============================] - 0s 74us/step - loss: 0.1719 - acc: 0.7561 - val_loss: 0.1367 - val_acc: 0.9045
Epoch 119/300
820/820 [==============================] - 0s 71us/step - loss: 0.1718 - acc: 0.7549 - val_loss: 0.1363 - val_acc: 0.8989
Epoch 120/300
820/820 [==============================] - 0s 69us/step - loss: 0.1717 - acc: 0.7598 - val_loss: 0.1372 - val_acc: 0.8989
Epoch 121/300
820/820 [==============================] - 0s 67us/step - loss: 0.1716 - acc: 0.7585 - val_loss: 0.1374 - val_acc: 0.9045
Epoch 122/300
820/820 [==============================] - 0s 69us/step - loss: 0.1715 - acc: 0.7598 - val_loss: 0.1347 - val_acc: 0.8989
Epoch 123/300
820/820 [==============================] - 0s 71us/step - loss: 0.1714 - acc: 0.7573 - val_loss: 0.1350 - val_acc: 0.8989
Epoch 124/300
820/820 [==============================] - 0s 71us/step - loss: 0.1713 - acc: 0.7585 - val_loss: 0.1333 - val_acc: 0.8989
Epoch 125/300
820/820 [==============================] - 0s 69us/step - loss: 0.1712 - acc: 0.7598 - val_loss: 0.1315 - val_acc: 0.8989
Epoch 126/300
820/820 [==============================] - 0s 69us/step - loss: 0.1712 - acc: 0.7610 - val_loss: 0.1284 - val_acc: 0.9101
Epoch 127/300
820/820 [==============================] - 0s 68us/step - loss: 0.1711 - acc: 0.7610 - val_loss: 0.1306 - val_acc: 0.9045
Epoch 128/300
820/820 [==============================] - 0s 71us/step - loss: 0.1710 - acc: 0.7585 - val_loss: 0.1325 - val_acc: 0.9045
Epoch 129/300
820/820 [==============================] - 0s 68us/step - loss: 0.1709 - acc: 0.7573 - val_loss: 0.1322 - val_acc: 0.9045
Epoch 130/300
820/820 [==============================] - 0s 69us/step - loss: 0.1708 - acc: 0.7598 - val_loss: 0.1313 - val_acc: 0.9045
Epoch 131/300
820/820 [==============================] - 0s 70us/step - loss: 0.1708 - acc: 0.7561 - val_loss: 0.1341 - val_acc: 0.9045
Epoch 132/300
820/820 [==============================] - 0s 72us/step - loss: 0.1707 - acc: 0.7573 - val_loss: 0.1331 - val_acc: 0.9045
Epoch 133/300
820/820 [==============================] - 0s 69us/step - loss: 0.1706 - acc: 0.7561 - val_loss: 0.1309 - val_acc: 0.9045
Epoch 134/300
820/820 [==============================] - 0s 68us/step - loss: 0.1706 - acc: 0.7585 - val_loss: 0.1295 - val_acc: 0.9045
Epoch 135/300
820/820 [==============================] - 0s 68us/step - loss: 0.1705 - acc: 0.7598 - val_loss: 0.1303 - val_acc: 0.9045
Epoch 136/300
820/820 [==============================] - 0s 75us/step - loss: 0.1704 - acc: 0.7598 - val_loss: 0.1293 - val_acc: 0.9101
Epoch 137/300
820/820 [==============================] - 0s 75us/step - loss: 0.1703 - acc: 0.7598 - val_loss: 0.1311 - val_acc: 0.9045
Epoch 138/300
820/820 [==============================] - 0s 67us/step - loss: 0.1703 - acc: 0.7585 - val_loss: 0.1336 - val_acc: 0.9045
Epoch 139/300
820/820 [==============================] - 0s 67us/step - loss: 0.1702 - acc: 0.7585 - val_loss: 0.1352 - val_acc: 0.9045
Epoch 140/300
820/820 [==============================] - 0s 69us/step - loss: 0.1701 - acc: 0.7610 - val_loss: 0.1331 - val_acc: 0.9045
Epoch 141/300
820/820 [==============================] - 0s 71us/step - loss: 0.1700 - acc: 0.7585 - val_loss: 0.1301 - val_acc: 0.9045
Epoch 142/300
820/820 [==============================] - 0s 66us/step - loss: 0.1700 - acc: 0.7598 - val_loss: 0.1296 - val_acc: 0.8989
Epoch 143/300
820/820 [==============================] - 0s 69us/step - loss: 0.1700 - acc: 0.7585 - val_loss: 0.1292 - val_acc: 0.9045
Epoch 144/300
820/820 [==============================] - 0s 67us/step - loss: 0.1699 - acc: 0.7598 - val_loss: 0.1315 - val_acc: 0.9045
Epoch 145/300
820/820 [==============================] - 0s 74us/step - loss: 0.1697 - acc: 0.7598 - val_loss: 0.1322 - val_acc: 0.9045
Epoch 146/300
820/820 [==============================] - 0s 66us/step - loss: 0.1697 - acc: 0.7585 - val_loss: 0.1301 - val_acc: 0.9045
Epoch 147/300
820/820 [==============================] - 0s 69us/step - loss: 0.1697 - acc: 0.7610 - val_loss: 0.1283 - val_acc: 0.9101
Epoch 148/300
820/820 [==============================] - 0s 69us/step - loss: 0.1697 - acc: 0.7598 - val_loss: 0.1305 - val_acc: 0.9045
Epoch 149/300
820/820 [==============================] - 0s 73us/step - loss: 0.1695 - acc: 0.7598 - val_loss: 0.1313 - val_acc: 0.9045
Epoch 150/300
820/820 [==============================] - 0s 69us/step - loss: 0.1694 - acc: 0.7585 - val_loss: 0.1326 - val_acc: 0.9045
Epoch 151/300
820/820 [==============================] - 0s 68us/step - loss: 0.1694 - acc: 0.7585 - val_loss: 0.1341 - val_acc: 0.9045
Epoch 152/300
820/820 [==============================] - 0s 68us/step - loss: 0.1694 - acc: 0.7610 - val_loss: 0.1355 - val_acc: 0.9045
Epoch 153/300
820/820 [==============================] - 0s 69us/step - loss: 0.1693 - acc: 0.7573 - val_loss: 0.1364 - val_acc: 0.8989
Epoch 154/300
820/820 [==============================] - 0s 78us/step - loss: 0.1691 - acc: 0.7598 - val_loss: 0.1329 - val_acc: 0.9045
Epoch 155/300
820/820 [==============================] - 0s 72us/step - loss: 0.1691 - acc: 0.7598 - val_loss: 0.1299 - val_acc: 0.9045
Epoch 156/300
820/820 [==============================] - 0s 69us/step - loss: 0.1690 - acc: 0.7585 - val_loss: 0.1312 - val_acc: 0.9045
Epoch 157/300
820/820 [==============================] - 0s 68us/step - loss: 0.1689 - acc: 0.7585 - val_loss: 0.1326 - val_acc: 0.9045
Epoch 158/300
820/820 [==============================] - 0s 72us/step - loss: 0.1689 - acc: 0.7610 - val_loss: 0.1323 - val_acc: 0.9045
Epoch 159/300
820/820 [==============================] - 0s 66us/step - loss: 0.1688 - acc: 0.7598 - val_loss: 0.1314 - val_acc: 0.9045
Epoch 160/300
820/820 [==============================] - 0s 68us/step - loss: 0.1687 - acc: 0.7598 - val_loss: 0.1313 - val_acc: 0.9045
Epoch 161/300
820/820 [==============================] - 0s 68us/step - loss: 0.1687 - acc: 0.7598 - val_loss: 0.1323 - val_acc: 0.9045
Epoch 162/300
820/820 [==============================] - 0s 72us/step - loss: 0.1687 - acc: 0.7598 - val_loss: 0.1291 - val_acc: 0.9045
Epoch 163/300
820/820 [==============================] - 0s 67us/step - loss: 0.1686 - acc: 0.7610 - val_loss: 0.1277 - val_acc: 0.9101
Epoch 164/300
820/820 [==============================] - 0s 71us/step - loss: 0.1687 - acc: 0.7585 - val_loss: 0.1304 - val_acc: 0.9045
Epoch 165/300
820/820 [==============================] - 0s 71us/step - loss: 0.1687 - acc: 0.7598 - val_loss: 0.1281 - val_acc: 0.9101
Epoch 166/300
820/820 [==============================] - 0s 71us/step - loss: 0.1685 - acc: 0.7598 - val_loss: 0.1285 - val_acc: 0.9045
Epoch 167/300
820/820 [==============================] - 0s 70us/step - loss: 0.1684 - acc: 0.7610 - val_loss: 0.1288 - val_acc: 0.9045
Epoch 168/300
820/820 [==============================] - 0s 67us/step - loss: 0.1683 - acc: 0.7598 - val_loss: 0.1293 - val_acc: 0.9045
Epoch 169/300
820/820 [==============================] - 0s 67us/step - loss: 0.1684 - acc: 0.7585 - val_loss: 0.1309 - val_acc: 0.9045
Epoch 170/300
820/820 [==============================] - 0s 71us/step - loss: 0.1681 - acc: 0.7598 - val_loss: 0.1308 - val_acc: 0.9045
Epoch 171/300
820/820 [==============================] - 0s 80us/step - loss: 0.1681 - acc: 0.7585 - val_loss: 0.1309 - val_acc: 0.9045
Epoch 172/300
820/820 [==============================] - 0s 67us/step - loss: 0.1681 - acc: 0.7585 - val_loss: 0.1321 - val_acc: 0.9045
Epoch 173/300
820/820 [==============================] - 0s 69us/step - loss: 0.1680 - acc: 0.7585 - val_loss: 0.1312 - val_acc: 0.9045
Epoch 174/300
820/820 [==============================] - 0s 68us/step - loss: 0.1679 - acc: 0.7598 - val_loss: 0.1317 - val_acc: 0.9045
Epoch 175/300
820/820 [==============================] - 0s 73us/step - loss: 0.1679 - acc: 0.7610 - val_loss: 0.1306 - val_acc: 0.9045
Epoch 176/300
820/820 [==============================] - 0s 72us/step - loss: 0.1678 - acc: 0.7585 - val_loss: 0.1303 - val_acc: 0.9045
Epoch 177/300
820/820 [==============================] - 0s 69us/step - loss: 0.1677 - acc: 0.7610 - val_loss: 0.1331 - val_acc: 0.9045
Epoch 178/300
820/820 [==============================] - 0s 67us/step - loss: 0.1679 - acc: 0.7610 - val_loss: 0.1376 - val_acc: 0.8933
Epoch 179/300
820/820 [==============================] - 0s 72us/step - loss: 0.1679 - acc: 0.7573 - val_loss: 0.1367 - val_acc: 0.8933
Epoch 180/300
820/820 [==============================] - 0s 66us/step - loss: 0.1678 - acc: 0.7610 - val_loss: 0.1322 - val_acc: 0.9045
Epoch 181/300
820/820 [==============================] - 0s 68us/step - loss: 0.1676 - acc: 0.7585 - val_loss: 0.1302 - val_acc: 0.9045
Epoch 182/300
820/820 [==============================] - 0s 70us/step - loss: 0.1675 - acc: 0.7585 - val_loss: 0.1315 - val_acc: 0.9045
Epoch 183/300
820/820 [==============================] - 0s 78us/step - loss: 0.1674 - acc: 0.7598 - val_loss: 0.1327 - val_acc: 0.9045
Epoch 184/300
820/820 [==============================] - 0s 68us/step - loss: 0.1674 - acc: 0.7610 - val_loss: 0.1318 - val_acc: 0.9045
Epoch 185/300
820/820 [==============================] - 0s 67us/step - loss: 0.1677 - acc: 0.7585 - val_loss: 0.1296 - val_acc: 0.9045
Epoch 186/300
820/820 [==============================] - 0s 69us/step - loss: 0.1674 - acc: 0.7610 - val_loss: 0.1352 - val_acc: 0.9045
Epoch 187/300
820/820 [==============================] - 0s 73us/step - loss: 0.1673 - acc: 0.7598 - val_loss: 0.1336 - val_acc: 0.9101
Epoch 188/300
820/820 [==============================] - 0s 70us/step - loss: 0.1671 - acc: 0.7610 - val_loss: 0.1314 - val_acc: 0.9045
Epoch 189/300
820/820 [==============================] - 0s 69us/step - loss: 0.1672 - acc: 0.7622 - val_loss: 0.1286 - val_acc: 0.9045
Epoch 190/300
820/820 [==============================] - 0s 68us/step - loss: 0.1671 - acc: 0.7598 - val_loss: 0.1287 - val_acc: 0.9045
Epoch 191/300
820/820 [==============================] - 0s 68us/step - loss: 0.1673 - acc: 0.7598 - val_loss: 0.1273 - val_acc: 0.9101
Epoch 192/300
820/820 [==============================] - 0s 73us/step - loss: 0.1669 - acc: 0.7598 - val_loss: 0.1311 - val_acc: 0.9045
Epoch 193/300
820/820 [==============================] - 0s 69us/step - loss: 0.1669 - acc: 0.7622 - val_loss: 0.1361 - val_acc: 0.8876
Epoch 194/300
820/820 [==============================] - 0s 68us/step - loss: 0.1670 - acc: 0.7610 - val_loss: 0.1320 - val_acc: 0.8989
Epoch 195/300
820/820 [==============================] - 0s 66us/step - loss: 0.1668 - acc: 0.7622 - val_loss: 0.1315 - val_acc: 0.8989
Epoch 196/300
820/820 [==============================] - 0s 75us/step - loss: 0.1667 - acc: 0.7585 - val_loss: 0.1305 - val_acc: 0.8989
Epoch 197/300
820/820 [==============================] - 0s 69us/step - loss: 0.1668 - acc: 0.7610 - val_loss: 0.1296 - val_acc: 0.9045
Epoch 198/300
820/820 [==============================] - 0s 68us/step - loss: 0.1667 - acc: 0.7610 - val_loss: 0.1300 - val_acc: 0.9045
Epoch 199/300
820/820 [==============================] - 0s 68us/step - loss: 0.1667 - acc: 0.7585 - val_loss: 0.1271 - val_acc: 0.9045
Epoch 200/300
820/820 [==============================] - 0s 77us/step - loss: 0.1667 - acc: 0.7610 - val_loss: 0.1263 - val_acc: 0.9101
Epoch 201/300
820/820 [==============================] - 0s 70us/step - loss: 0.1667 - acc: 0.7610 - val_loss: 0.1256 - val_acc: 0.9101
Epoch 202/300
820/820 [==============================] - 0s 67us/step - loss: 0.1666 - acc: 0.7610 - val_loss: 0.1257 - val_acc: 0.9045
Epoch 203/300
820/820 [==============================] - 0s 69us/step - loss: 0.1664 - acc: 0.7598 - val_loss: 0.1278 - val_acc: 0.9101
Epoch 204/300
820/820 [==============================] - 0s 71us/step - loss: 0.1664 - acc: 0.7585 - val_loss: 0.1304 - val_acc: 0.8933
Epoch 205/300
820/820 [==============================] - 0s 70us/step - loss: 0.1663 - acc: 0.7598 - val_loss: 0.1299 - val_acc: 0.8989
Epoch 206/300
820/820 [==============================] - 0s 77us/step - loss: 0.1663 - acc: 0.7610 - val_loss: 0.1274 - val_acc: 0.9101
Epoch 207/300
820/820 [==============================] - 0s 106us/step - loss: 0.1663 - acc: 0.7610 - val_loss: 0.1284 - val_acc: 0.9045
Epoch 208/300
820/820 [==============================] - 0s 84us/step - loss: 0.1662 - acc: 0.7585 - val_loss: 0.1304 - val_acc: 0.8933
Epoch 209/300
820/820 [==============================] - 0s 83us/step - loss: 0.1661 - acc: 0.7610 - val_loss: 0.1308 - val_acc: 0.8933
Epoch 210/300
820/820 [==============================] - 0s 92us/step - loss: 0.1661 - acc: 0.7610 - val_loss: 0.1300 - val_acc: 0.8933
Epoch 211/300
820/820 [==============================] - 0s 79us/step - loss: 0.1660 - acc: 0.7610 - val_loss: 0.1287 - val_acc: 0.9045
Epoch 212/300
820/820 [==============================] - 0s 85us/step - loss: 0.1660 - acc: 0.7598 - val_loss: 0.1288 - val_acc: 0.9045
Epoch 213/300
820/820 [==============================] - 0s 81us/step - loss: 0.1660 - acc: 0.7598 - val_loss: 0.1286 - val_acc: 0.9045
Epoch 214/300
820/820 [==============================] - 0s 85us/step - loss: 0.1660 - acc: 0.7622 - val_loss: 0.1269 - val_acc: 0.9045
Epoch 215/300
820/820 [==============================] - 0s 98us/step - loss: 0.1661 - acc: 0.7610 - val_loss: 0.1302 - val_acc: 0.8933
Epoch 216/300
820/820 [==============================] - 0s 88us/step - loss: 0.1658 - acc: 0.7610 - val_loss: 0.1306 - val_acc: 0.8933
Epoch 217/300
820/820 [==============================] - 0s 99us/step - loss: 0.1658 - acc: 0.7622 - val_loss: 0.1304 - val_acc: 0.8933
Epoch 218/300
820/820 [==============================] - 0s 121us/step - loss: 0.1657 - acc: 0.7634 - val_loss: 0.1321 - val_acc: 0.8933
Epoch 219/300
820/820 [==============================] - 0s 124us/step - loss: 0.1657 - acc: 0.7610 - val_loss: 0.1313 - val_acc: 0.8933
Epoch 220/300
820/820 [==============================] - 0s 141us/step - loss: 0.1657 - acc: 0.7622 - val_loss: 0.1307 - val_acc: 0.8933
Epoch 221/300
820/820 [==============================] - 0s 292us/step - loss: 0.1657 - acc: 0.7610 - val_loss: 0.1308 - val_acc: 0.8933
Epoch 222/300
820/820 [==============================] - 0s 147us/step - loss: 0.1656 - acc: 0.7622 - val_loss: 0.1284 - val_acc: 0.8989
Epoch 223/300
820/820 [==============================] - 0s 146us/step - loss: 0.1656 - acc: 0.7622 - val_loss: 0.1280 - val_acc: 0.8989
Epoch 224/300
820/820 [==============================] - 0s 170us/step - loss: 0.1655 - acc: 0.7610 - val_loss: 0.1284 - val_acc: 0.8989
Epoch 225/300
820/820 [==============================] - 0s 144us/step - loss: 0.1655 - acc: 0.7610 - val_loss: 0.1299 - val_acc: 0.8933
Epoch 226/300
820/820 [==============================] - 0s 119us/step - loss: 0.1654 - acc: 0.7634 - val_loss: 0.1313 - val_acc: 0.8933
Epoch 227/300
820/820 [==============================] - 0s 114us/step - loss: 0.1654 - acc: 0.7622 - val_loss: 0.1295 - val_acc: 0.8989
Epoch 228/300
820/820 [==============================] - ETA: 0s - loss: 0.1723 - acc: 0.745 - 0s 121us/step - loss: 0.1654 - acc: 0.7610 - val_loss: 0.1296 - val_acc: 0.8989
Epoch 229/300
820/820 [==============================] - 0s 103us/step - loss: 0.1654 - acc: 0.7646 - val_loss: 0.1293 - val_acc: 0.8933
Epoch 230/300
820/820 [==============================] - 0s 112us/step - loss: 0.1653 - acc: 0.7610 - val_loss: 0.1289 - val_acc: 0.9045
Epoch 231/300
820/820 [==============================] - 0s 104us/step - loss: 0.1652 - acc: 0.7646 - val_loss: 0.1309 - val_acc: 0.8933
Epoch 232/300
820/820 [==============================] - 0s 90us/step - loss: 0.1652 - acc: 0.7622 - val_loss: 0.1331 - val_acc: 0.9045
Epoch 233/300
820/820 [==============================] - 0s 88us/step - loss: 0.1652 - acc: 0.7634 - val_loss: 0.1324 - val_acc: 0.9045
Epoch 234/300
820/820 [==============================] - 0s 113us/step - loss: 0.1651 - acc: 0.7634 - val_loss: 0.1296 - val_acc: 0.8933
Epoch 235/300
820/820 [==============================] - 0s 105us/step - loss: 0.1651 - acc: 0.7622 - val_loss: 0.1299 - val_acc: 0.8933
Epoch 236/300
820/820 [==============================] - 0s 119us/step - loss: 0.1650 - acc: 0.7622 - val_loss: 0.1273 - val_acc: 0.9045
Epoch 237/300
820/820 [==============================] - 0s 112us/step - loss: 0.1651 - acc: 0.7622 - val_loss: 0.1268 - val_acc: 0.9045
Epoch 238/300
820/820 [==============================] - 0s 107us/step - loss: 0.1650 - acc: 0.7622 - val_loss: 0.1285 - val_acc: 0.9045
Epoch 239/300
820/820 [==============================] - 0s 101us/step - loss: 0.1651 - acc: 0.7646 - val_loss: 0.1318 - val_acc: 0.8989
Epoch 240/300
820/820 [==============================] - 0s 93us/step - loss: 0.1649 - acc: 0.7659 - val_loss: 0.1316 - val_acc: 0.8933
Epoch 241/300
820/820 [==============================] - 0s 124us/step - loss: 0.1649 - acc: 0.7622 - val_loss: 0.1285 - val_acc: 0.9045
Epoch 242/300
820/820 [==============================] - 0s 130us/step - loss: 0.1649 - acc: 0.7622 - val_loss: 0.1271 - val_acc: 0.9045
Epoch 243/300
820/820 [==============================] - 0s 109us/step - loss: 0.1648 - acc: 0.7634 - val_loss: 0.1295 - val_acc: 0.8933
Epoch 244/300
820/820 [==============================] - 0s 118us/step - loss: 0.1648 - acc: 0.7646 - val_loss: 0.1328 - val_acc: 0.8989
Epoch 245/300
820/820 [==============================] - 0s 119us/step - loss: 0.1648 - acc: 0.7659 - val_loss: 0.1341 - val_acc: 0.8933
Epoch 246/300
820/820 [==============================] - 0s 99us/step - loss: 0.1648 - acc: 0.7659 - val_loss: 0.1349 - val_acc: 0.8933
Epoch 247/300
820/820 [==============================] - 0s 118us/step - loss: 0.1649 - acc: 0.7634 - val_loss: 0.1313 - val_acc: 0.8933
Epoch 248/300
820/820 [==============================] - 0s 109us/step - loss: 0.1646 - acc: 0.7646 - val_loss: 0.1303 - val_acc: 0.8876
Epoch 249/300
820/820 [==============================] - 0s 112us/step - loss: 0.1646 - acc: 0.7646 - val_loss: 0.1296 - val_acc: 0.8933
Epoch 250/300
820/820 [==============================] - 0s 117us/step - loss: 0.1645 - acc: 0.7634 - val_loss: 0.1315 - val_acc: 0.8933
Epoch 251/300
820/820 [==============================] - 0s 98us/step - loss: 0.1645 - acc: 0.7646 - val_loss: 0.1294 - val_acc: 0.8933
Epoch 252/300
820/820 [==============================] - 0s 99us/step - loss: 0.1646 - acc: 0.7634 - val_loss: 0.1269 - val_acc: 0.8989
Epoch 253/300
820/820 [==============================] - 0s 105us/step - loss: 0.1645 - acc: 0.7634 - val_loss: 0.1272 - val_acc: 0.8989
Epoch 254/300
820/820 [==============================] - 0s 115us/step - loss: 0.1645 - acc: 0.7634 - val_loss: 0.1284 - val_acc: 0.8933
Epoch 255/300
820/820 [==============================] - 0s 115us/step - loss: 0.1642 - acc: 0.7622 - val_loss: 0.1313 - val_acc: 0.8933
Epoch 256/300
820/820 [==============================] - 0s 116us/step - loss: 0.1645 - acc: 0.7659 - val_loss: 0.1347 - val_acc: 0.8764
Epoch 257/300
820/820 [==============================] - 0s 106us/step - loss: 0.1645 - acc: 0.7634 - val_loss: 0.1349 - val_acc: 0.8820
Epoch 258/300
820/820 [==============================] - 0s 117us/step - loss: 0.1643 - acc: 0.7683 - val_loss: 0.1324 - val_acc: 0.8876
Epoch 259/300
820/820 [==============================] - 0s 110us/step - loss: 0.1642 - acc: 0.7671 - val_loss: 0.1319 - val_acc: 0.8933
Epoch 260/300
820/820 [==============================] - 0s 109us/step - loss: 0.1643 - acc: 0.7646 - val_loss: 0.1299 - val_acc: 0.8933
Epoch 261/300
820/820 [==============================] - 0s 103us/step - loss: 0.1641 - acc: 0.7634 - val_loss: 0.1315 - val_acc: 0.8933
Epoch 262/300
820/820 [==============================] - 0s 106us/step - loss: 0.1641 - acc: 0.7671 - val_loss: 0.1301 - val_acc: 0.8933
Epoch 263/300
820/820 [==============================] - 0s 108us/step - loss: 0.1641 - acc: 0.7646 - val_loss: 0.1280 - val_acc: 0.8933
Epoch 264/300
820/820 [==============================] - 0s 119us/step - loss: 0.1640 - acc: 0.7659 - val_loss: 0.1281 - val_acc: 0.8933
Epoch 265/300
820/820 [==============================] - 0s 123us/step - loss: 0.1640 - acc: 0.7659 - val_loss: 0.1291 - val_acc: 0.8933
Epoch 266/300
820/820 [==============================] - 0s 106us/step - loss: 0.1645 - acc: 0.7646 - val_loss: 0.1373 - val_acc: 0.8764
Epoch 267/300
820/820 [==============================] - 0s 95us/step - loss: 0.1642 - acc: 0.7659 - val_loss: 0.1315 - val_acc: 0.8876
Epoch 268/300
820/820 [==============================] - 0s 91us/step - loss: 0.1640 - acc: 0.7659 - val_loss: 0.1287 - val_acc: 0.8933
Epoch 269/300
820/820 [==============================] - 0s 89us/step - loss: 0.1639 - acc: 0.7634 - val_loss: 0.1307 - val_acc: 0.8933
Epoch 270/300
820/820 [==============================] - 0s 90us/step - loss: 0.1638 - acc: 0.7659 - val_loss: 0.1298 - val_acc: 0.8933
Epoch 271/300
820/820 [==============================] - 0s 104us/step - loss: 0.1638 - acc: 0.7646 - val_loss: 0.1314 - val_acc: 0.8876
Epoch 272/300
820/820 [==============================] - 0s 108us/step - loss: 0.1638 - acc: 0.7671 - val_loss: 0.1324 - val_acc: 0.8820
Epoch 273/300
820/820 [==============================] - 0s 102us/step - loss: 0.1637 - acc: 0.7659 - val_loss: 0.1303 - val_acc: 0.8876
Epoch 274/300
820/820 [==============================] - 0s 115us/step - loss: 0.1637 - acc: 0.7659 - val_loss: 0.1299 - val_acc: 0.8933
Epoch 275/300
820/820 [==============================] - 0s 96us/step - loss: 0.1637 - acc: 0.7659 - val_loss: 0.1292 - val_acc: 0.8933
Epoch 276/300
820/820 [==============================] - 0s 110us/step - loss: 0.1637 - acc: 0.7659 - val_loss: 0.1307 - val_acc: 0.8876
Epoch 277/300
820/820 [==============================] - 0s 115us/step - loss: 0.1636 - acc: 0.7671 - val_loss: 0.1304 - val_acc: 0.8820
Epoch 278/300
820/820 [==============================] - 0s 112us/step - loss: 0.1636 - acc: 0.7683 - val_loss: 0.1310 - val_acc: 0.8820
Epoch 279/300
820/820 [==============================] - 0s 119us/step - loss: 0.1637 - acc: 0.7671 - val_loss: 0.1312 - val_acc: 0.8876
Epoch 280/300
820/820 [==============================] - 0s 114us/step - loss: 0.1636 - acc: 0.7683 - val_loss: 0.1273 - val_acc: 0.8933
Epoch 281/300
820/820 [==============================] - 0s 99us/step - loss: 0.1637 - acc: 0.7659 - val_loss: 0.1263 - val_acc: 0.8989
Epoch 282/300
820/820 [==============================] - 0s 106us/step - loss: 0.1636 - acc: 0.7659 - val_loss: 0.1297 - val_acc: 0.8989
Epoch 283/300
820/820 [==============================] - 0s 113us/step - loss: 0.1634 - acc: 0.7671 - val_loss: 0.1319 - val_acc: 0.8820
Epoch 284/300
820/820 [==============================] - 0s 103us/step - loss: 0.1634 - acc: 0.7671 - val_loss: 0.1340 - val_acc: 0.8820
Epoch 285/300
820/820 [==============================] - 0s 95us/step - loss: 0.1638 - acc: 0.7659 - val_loss: 0.1365 - val_acc: 0.8764
Epoch 286/300
820/820 [==============================] - 0s 93us/step - loss: 0.1635 - acc: 0.7671 - val_loss: 0.1329 - val_acc: 0.8764
Epoch 287/300
820/820 [==============================] - 0s 105us/step - loss: 0.1634 - acc: 0.7671 - val_loss: 0.1300 - val_acc: 0.8989
Epoch 288/300
820/820 [==============================] - 0s 88us/step - loss: 0.1634 - acc: 0.7671 - val_loss: 0.1300 - val_acc: 0.8933
Epoch 289/300
820/820 [==============================] - 0s 97us/step - loss: 0.1633 - acc: 0.7671 - val_loss: 0.1290 - val_acc: 0.8989
Epoch 290/300
820/820 [==============================] - 0s 92us/step - loss: 0.1633 - acc: 0.7683 - val_loss: 0.1266 - val_acc: 0.8989
Epoch 291/300
820/820 [==============================] - 0s 92us/step - loss: 0.1632 - acc: 0.7683 - val_loss: 0.1274 - val_acc: 0.8933
Epoch 292/300
820/820 [==============================] - 0s 104us/step - loss: 0.1632 - acc: 0.7659 - val_loss: 0.1279 - val_acc: 0.8933
Epoch 293/300
820/820 [==============================] - 0s 106us/step - loss: 0.1632 - acc: 0.7671 - val_loss: 0.1300 - val_acc: 0.8764
Epoch 294/300
820/820 [==============================] - 0s 91us/step - loss: 0.1631 - acc: 0.7683 - val_loss: 0.1311 - val_acc: 0.8764
Epoch 295/300
820/820 [==============================] - 0s 92us/step - loss: 0.1632 - acc: 0.7671 - val_loss: 0.1289 - val_acc: 0.8876
Epoch 296/300
820/820 [==============================] - 0s 91us/step - loss: 0.1631 - acc: 0.7671 - val_loss: 0.1281 - val_acc: 0.8876
Epoch 297/300
820/820 [==============================] - 0s 87us/step - loss: 0.1630 - acc: 0.7671 - val_loss: 0.1264 - val_acc: 0.8989
Epoch 298/300
820/820 [==============================] - 0s 90us/step - loss: 0.1633 - acc: 0.7646 - val_loss: 0.1241 - val_acc: 0.9045
Epoch 299/300
820/820 [==============================] - 0s 90us/step - loss: 0.1631 - acc: 0.7646 - val_loss: 0.1275 - val_acc: 0.8989
Epoch 300/300
820/820 [==============================] - 0s 92us/step - loss: 0.1630 - acc: 0.7659 - val_loss: 0.1275 - val_acc: 0.8989
In [57]:
results = test11.evaluate(X_test1, Y_test2)
print(results)
top_layer = test11.layers[0]
plt.title('Visualize First Layer Filter 1')
plt.plot(top_layer.get_weights()[0][:, :, 0].squeeze())
plt.show()
plt.title('Visualize First Layer Filter 2')
plt.plot(top_layer.get_weights()[0][:, :, 1].squeeze())
plt.show()
plt.title('Visualize First Layer Filter 3')
plt.plot(top_layer.get_weights()[0][:, :, 2].squeeze())
plt.show()
plt.title('Visualize First Layer Filter 4')
plt.plot(top_layer.get_weights()[0][:, :, 3].squeeze())
plt.show()
plt.title('Visualize First Layer Filter 5')
plt.plot(top_layer.get_weights()[0][:, :, 4].squeeze())
plt.show()
plt.title('Visualize First Layer Filter 6')
plt.plot(top_layer.get_weights()[0][:, :, 5].squeeze())
plt.show()
177/177 [==============================] - 0s 131us/step
[0.16446652705386533, 0.8305084749130206]

We tried a different loss function, Binary Cross Entropy, on this same model and had a test accuracy of 82%

We tried a smaller filter size. We adjusted the model to have 10 filters of kernal size 10. Our Test Acc was 84.7%

Try with Size 10 filter and two 1D CNN Layers - Acc = 85%

We also tried having 2 1D CNN Layers with filters of length 40 and got an accuracy of 85%

In [58]:
test17 = Sequential()
test17.add(Conv1D(2, (40),
                 activation='relu',
                 input_shape=(100,1)))
test17.add(MaxPooling1D(pool_size=(2)))
test17.add(Conv1D(2, (20), activation='relu'))

test17.add(Flatten())
test17.add(Dense(1, activation = 'sigmoid'))

print(test17.summary())

test17.compile(loss='mean_squared_error', optimizer='Adam',metrics=['accuracy'])
history = test17.fit(X_train1,Y_train2, epochs=900, batch_size=100, validation_data=(X_val1,Y_val2))
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv1d_3 (Conv1D)            (None, 61, 2)             82        
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 30, 2)             0         
_________________________________________________________________
conv1d_4 (Conv1D)            (None, 11, 2)             82        
_________________________________________________________________
flatten_3 (Flatten)          (None, 22)                0         
_________________________________________________________________
dense_3 (Dense)              (None, 1)                 23        
=================================================================
Total params: 187
Trainable params: 187
Non-trainable params: 0
_________________________________________________________________
None
Train on 820 samples, validate on 178 samples
Epoch 1/900
820/820 [==============================] - 1s 684us/step - loss: 0.2501 - acc: 0.5707 - val_loss: 0.2498 - val_acc: 0.7809
Epoch 2/900
820/820 [==============================] - 0s 102us/step - loss: 0.2500 - acc: 0.5451 - val_loss: 0.2494 - val_acc: 0.7303
Epoch 3/900
820/820 [==============================] - 0s 95us/step - loss: 0.2499 - acc: 0.5280 - val_loss: 0.2491 - val_acc: 0.7247
Epoch 4/900
820/820 [==============================] - 0s 96us/step - loss: 0.2499 - acc: 0.5268 - val_loss: 0.2489 - val_acc: 0.7247
Epoch 5/900
820/820 [==============================] - 0s 99us/step - loss: 0.2498 - acc: 0.5244 - val_loss: 0.2486 - val_acc: 0.7247
Epoch 6/900
820/820 [==============================] - 0s 95us/step - loss: 0.2498 - acc: 0.5268 - val_loss: 0.2483 - val_acc: 0.7247
Epoch 7/900
820/820 [==============================] - 0s 100us/step - loss: 0.2497 - acc: 0.5280 - val_loss: 0.2482 - val_acc: 0.7247
Epoch 8/900
820/820 [==============================] - 0s 101us/step - loss: 0.2496 - acc: 0.5378 - val_loss: 0.2482 - val_acc: 0.7303
Epoch 9/900
820/820 [==============================] - 0s 93us/step - loss: 0.2493 - acc: 0.5463 - val_loss: 0.2479 - val_acc: 0.7360
Epoch 10/900
820/820 [==============================] - 0s 95us/step - loss: 0.2490 - acc: 0.5732 - val_loss: 0.2474 - val_acc: 0.7584
Epoch 11/900
820/820 [==============================] - 0s 97us/step - loss: 0.2484 - acc: 0.6098 - val_loss: 0.2467 - val_acc: 0.7978
Epoch 12/900
820/820 [==============================] - 0s 101us/step - loss: 0.2475 - acc: 0.6317 - val_loss: 0.2459 - val_acc: 0.8202
Epoch 13/900
820/820 [==============================] - 0s 95us/step - loss: 0.2459 - acc: 0.6756 - val_loss: 0.2442 - val_acc: 0.8539
Epoch 14/900
820/820 [==============================] - 0s 95us/step - loss: 0.2436 - acc: 0.6963 - val_loss: 0.2417 - val_acc: 0.8315
Epoch 15/900
820/820 [==============================] - 0s 97us/step - loss: 0.2401 - acc: 0.6976 - val_loss: 0.2399 - val_acc: 0.7697
Epoch 16/900
820/820 [==============================] - 0s 91us/step - loss: 0.2362 - acc: 0.7024 - val_loss: 0.2377 - val_acc: 0.6629
Epoch 17/900
820/820 [==============================] - 0s 95us/step - loss: 0.2317 - acc: 0.6817 - val_loss: 0.2388 - val_acc: 0.5787
Epoch 18/900
820/820 [==============================] - 0s 95us/step - loss: 0.2268 - acc: 0.6549 - val_loss: 0.2427 - val_acc: 0.5281
Epoch 19/900
820/820 [==============================] - 0s 96us/step - loss: 0.2230 - acc: 0.6329 - val_loss: 0.2453 - val_acc: 0.5169
Epoch 20/900
820/820 [==============================] - 0s 100us/step - loss: 0.2194 - acc: 0.6537 - val_loss: 0.2396 - val_acc: 0.5787
Epoch 21/900
820/820 [==============================] - 0s 98us/step - loss: 0.2164 - acc: 0.6610 - val_loss: 0.2382 - val_acc: 0.5787
Epoch 22/900
820/820 [==============================] - 0s 94us/step - loss: 0.2140 - acc: 0.6659 - val_loss: 0.2332 - val_acc: 0.6011
Epoch 23/900
820/820 [==============================] - 0s 98us/step - loss: 0.2120 - acc: 0.6756 - val_loss: 0.2339 - val_acc: 0.5955
Epoch 24/900
820/820 [==============================] - 0s 102us/step - loss: 0.2101 - acc: 0.6756 - val_loss: 0.2319 - val_acc: 0.6124
Epoch 25/900
820/820 [==============================] - 0s 94us/step - loss: 0.2085 - acc: 0.6841 - val_loss: 0.2320 - val_acc: 0.6124
Epoch 26/900
820/820 [==============================] - 0s 97us/step - loss: 0.2070 - acc: 0.6878 - val_loss: 0.2257 - val_acc: 0.6517
Epoch 27/900
820/820 [==============================] - 0s 97us/step - loss: 0.2057 - acc: 0.6963 - val_loss: 0.2268 - val_acc: 0.6348
Epoch 28/900
820/820 [==============================] - 0s 94us/step - loss: 0.2048 - acc: 0.6915 - val_loss: 0.2261 - val_acc: 0.6404
Epoch 29/900
820/820 [==============================] - 0s 96us/step - loss: 0.2039 - acc: 0.6976 - val_loss: 0.2156 - val_acc: 0.7416
Epoch 30/900
820/820 [==============================] - 0s 95us/step - loss: 0.2029 - acc: 0.6963 - val_loss: 0.2200 - val_acc: 0.7022
Epoch 31/900
820/820 [==============================] - 0s 94us/step - loss: 0.2018 - acc: 0.7061 - val_loss: 0.2240 - val_acc: 0.6461
Epoch 32/900
820/820 [==============================] - 0s 104us/step - loss: 0.2011 - acc: 0.7061 - val_loss: 0.2193 - val_acc: 0.6966
Epoch 33/900
820/820 [==============================] - 0s 108us/step - loss: 0.2004 - acc: 0.7061 - val_loss: 0.2162 - val_acc: 0.7135
Epoch 34/900
820/820 [==============================] - 0s 94us/step - loss: 0.1995 - acc: 0.7073 - val_loss: 0.2160 - val_acc: 0.7135
Epoch 35/900
820/820 [==============================] - 0s 95us/step - loss: 0.1988 - acc: 0.7146 - val_loss: 0.2146 - val_acc: 0.7303
Epoch 36/900
820/820 [==============================] - 0s 98us/step - loss: 0.1981 - acc: 0.7195 - val_loss: 0.2122 - val_acc: 0.7528
Epoch 37/900
820/820 [==============================] - 0s 94us/step - loss: 0.1975 - acc: 0.7098 - val_loss: 0.2148 - val_acc: 0.7303
Epoch 38/900
820/820 [==============================] - 0s 95us/step - loss: 0.1968 - acc: 0.7195 - val_loss: 0.2033 - val_acc: 0.8258
Epoch 39/900
820/820 [==============================] - 0s 95us/step - loss: 0.1963 - acc: 0.7232 - val_loss: 0.2047 - val_acc: 0.8146
Epoch 40/900
820/820 [==============================] - 0s 95us/step - loss: 0.1955 - acc: 0.7122 - val_loss: 0.2070 - val_acc: 0.7978
Epoch 41/900
820/820 [==============================] - 0s 97us/step - loss: 0.1950 - acc: 0.7146 - val_loss: 0.2045 - val_acc: 0.8090
Epoch 42/900
820/820 [==============================] - 0s 96us/step - loss: 0.1945 - acc: 0.7183 - val_loss: 0.2049 - val_acc: 0.8090
Epoch 43/900
820/820 [==============================] - 0s 96us/step - loss: 0.1939 - acc: 0.7183 - val_loss: 0.2062 - val_acc: 0.7978
Epoch 44/900
820/820 [==============================] - 0s 95us/step - loss: 0.1934 - acc: 0.7232 - val_loss: 0.2055 - val_acc: 0.7978
Epoch 45/900
820/820 [==============================] - 0s 102us/step - loss: 0.1928 - acc: 0.7329 - val_loss: 0.1970 - val_acc: 0.8539
Epoch 46/900
820/820 [==============================] - 0s 99us/step - loss: 0.1928 - acc: 0.7390 - val_loss: 0.1927 - val_acc: 0.8708
Epoch 47/900
820/820 [==============================] - 0s 94us/step - loss: 0.1919 - acc: 0.7390 - val_loss: 0.2036 - val_acc: 0.8146
Epoch 48/900
820/820 [==============================] - 0s 97us/step - loss: 0.1915 - acc: 0.7341 - val_loss: 0.1982 - val_acc: 0.8483
Epoch 49/900
820/820 [==============================] - 0s 129us/step - loss: 0.1912 - acc: 0.7305 - val_loss: 0.2031 - val_acc: 0.8146
Epoch 50/900
820/820 [==============================] - 0s 150us/step - loss: 0.1912 - acc: 0.7415 - val_loss: 0.1866 - val_acc: 0.8820
Epoch 51/900
820/820 [==============================] - 0s 157us/step - loss: 0.1903 - acc: 0.7463 - val_loss: 0.1933 - val_acc: 0.8652
Epoch 52/900
820/820 [==============================] - 0s 121us/step - loss: 0.1898 - acc: 0.7463 - val_loss: 0.1994 - val_acc: 0.8258
Epoch 53/900
820/820 [==============================] - 0s 99us/step - loss: 0.1892 - acc: 0.7451 - val_loss: 0.1971 - val_acc: 0.8315
Epoch 54/900
820/820 [==============================] - 0s 98us/step - loss: 0.1888 - acc: 0.7451 - val_loss: 0.1964 - val_acc: 0.8427
Epoch 55/900
820/820 [==============================] - 0s 100us/step - loss: 0.1884 - acc: 0.7488 - val_loss: 0.1972 - val_acc: 0.8315
Epoch 56/900
820/820 [==============================] - 0s 99us/step - loss: 0.1881 - acc: 0.7463 - val_loss: 0.1986 - val_acc: 0.8315
Epoch 57/900
820/820 [==============================] - 0s 96us/step - loss: 0.1877 - acc: 0.7488 - val_loss: 0.1896 - val_acc: 0.8652
Epoch 58/900
820/820 [==============================] - 0s 97us/step - loss: 0.1873 - acc: 0.7476 - val_loss: 0.1917 - val_acc: 0.8652
Epoch 59/900
820/820 [==============================] - 0s 97us/step - loss: 0.1871 - acc: 0.7488 - val_loss: 0.1935 - val_acc: 0.8539
Epoch 60/900
820/820 [==============================] - 0s 96us/step - loss: 0.1865 - acc: 0.7524 - val_loss: 0.1871 - val_acc: 0.8652
Epoch 61/900
820/820 [==============================] - 0s 94us/step - loss: 0.1864 - acc: 0.7500 - val_loss: 0.1841 - val_acc: 0.8764
Epoch 62/900
820/820 [==============================] - 0s 105us/step - loss: 0.1863 - acc: 0.7476 - val_loss: 0.1825 - val_acc: 0.8820
Epoch 63/900
820/820 [==============================] - 0s 97us/step - loss: 0.1857 - acc: 0.7476 - val_loss: 0.1871 - val_acc: 0.8652
Epoch 64/900
820/820 [==============================] - 0s 96us/step - loss: 0.1858 - acc: 0.7439 - val_loss: 0.1933 - val_acc: 0.8539
Epoch 65/900
820/820 [==============================] - 0s 103us/step - loss: 0.1851 - acc: 0.7512 - val_loss: 0.1798 - val_acc: 0.8820
Epoch 66/900
820/820 [==============================] - 0s 99us/step - loss: 0.1849 - acc: 0.7512 - val_loss: 0.1917 - val_acc: 0.8539
Epoch 67/900
820/820 [==============================] - 0s 106us/step - loss: 0.1849 - acc: 0.7463 - val_loss: 0.1904 - val_acc: 0.8539
Epoch 68/900
820/820 [==============================] - 0s 104us/step - loss: 0.1845 - acc: 0.7537 - val_loss: 0.1857 - val_acc: 0.8652
Epoch 69/900
820/820 [==============================] - 0s 97us/step - loss: 0.1838 - acc: 0.7549 - val_loss: 0.1880 - val_acc: 0.8596
Epoch 70/900
820/820 [==============================] - 0s 128us/step - loss: 0.1838 - acc: 0.7537 - val_loss: 0.1863 - val_acc: 0.8596
Epoch 71/900
820/820 [==============================] - 0s 111us/step - loss: 0.1832 - acc: 0.7524 - val_loss: 0.1800 - val_acc: 0.8764
Epoch 72/900
820/820 [==============================] - 0s 108us/step - loss: 0.1831 - acc: 0.7524 - val_loss: 0.1834 - val_acc: 0.8652
Epoch 73/900
820/820 [==============================] - 0s 110us/step - loss: 0.1829 - acc: 0.7561 - val_loss: 0.1889 - val_acc: 0.8539
Epoch 74/900
820/820 [==============================] - 0s 118us/step - loss: 0.1831 - acc: 0.7500 - val_loss: 0.1861 - val_acc: 0.8596
Epoch 75/900
820/820 [==============================] - 0s 106us/step - loss: 0.1828 - acc: 0.7500 - val_loss: 0.1778 - val_acc: 0.8764
Epoch 76/900
820/820 [==============================] - 0s 153us/step - loss: 0.1824 - acc: 0.7512 - val_loss: 0.1788 - val_acc: 0.8708
Epoch 77/900
820/820 [==============================] - 0s 106us/step - loss: 0.1819 - acc: 0.7549 - val_loss: 0.1831 - val_acc: 0.8596
Epoch 78/900
820/820 [==============================] - 0s 106us/step - loss: 0.1816 - acc: 0.7524 - val_loss: 0.1801 - val_acc: 0.8708
Epoch 79/900
820/820 [==============================] - 0s 105us/step - loss: 0.1814 - acc: 0.7561 - val_loss: 0.1791 - val_acc: 0.8708
Epoch 80/900
820/820 [==============================] - 0s 113us/step - loss: 0.1812 - acc: 0.7561 - val_loss: 0.1818 - val_acc: 0.8652
Epoch 81/900
820/820 [==============================] - 0s 99us/step - loss: 0.1812 - acc: 0.7549 - val_loss: 0.1852 - val_acc: 0.8596
Epoch 82/900
820/820 [==============================] - 0s 146us/step - loss: 0.1809 - acc: 0.7537 - val_loss: 0.1780 - val_acc: 0.8708
Epoch 83/900
820/820 [==============================] - 0s 113us/step - loss: 0.1807 - acc: 0.7537 - val_loss: 0.1719 - val_acc: 0.8652
Epoch 84/900
820/820 [==============================] - 0s 108us/step - loss: 0.1806 - acc: 0.7524 - val_loss: 0.1755 - val_acc: 0.8652
Epoch 85/900
820/820 [==============================] - 0s 95us/step - loss: 0.1804 - acc: 0.7561 - val_loss: 0.1820 - val_acc: 0.8596
Epoch 86/900
820/820 [==============================] - 0s 100us/step - loss: 0.1800 - acc: 0.7537 - val_loss: 0.1777 - val_acc: 0.8596
Epoch 87/900
820/820 [==============================] - 0s 111us/step - loss: 0.1800 - acc: 0.7573 - val_loss: 0.1739 - val_acc: 0.8708
Epoch 88/900
820/820 [==============================] - 0s 94us/step - loss: 0.1796 - acc: 0.7537 - val_loss: 0.1801 - val_acc: 0.8596
Epoch 89/900
820/820 [==============================] - 0s 100us/step - loss: 0.1800 - acc: 0.7537 - val_loss: 0.1827 - val_acc: 0.8596
Epoch 90/900
820/820 [==============================] - 0s 104us/step - loss: 0.1801 - acc: 0.7488 - val_loss: 0.1732 - val_acc: 0.8596
Epoch 91/900
820/820 [==============================] - 0s 91us/step - loss: 0.1793 - acc: 0.7512 - val_loss: 0.1864 - val_acc: 0.8539
Epoch 92/900
820/820 [==============================] - 0s 96us/step - loss: 0.1792 - acc: 0.7549 - val_loss: 0.1754 - val_acc: 0.8652
Epoch 93/900
820/820 [==============================] - 0s 98us/step - loss: 0.1789 - acc: 0.7537 - val_loss: 0.1709 - val_acc: 0.8708
Epoch 94/900
820/820 [==============================] - 0s 94us/step - loss: 0.1791 - acc: 0.7537 - val_loss: 0.1705 - val_acc: 0.8708
Epoch 95/900
820/820 [==============================] - 0s 98us/step - loss: 0.1787 - acc: 0.7537 - val_loss: 0.1791 - val_acc: 0.8539
Epoch 96/900
820/820 [==============================] - 0s 96us/step - loss: 0.1793 - acc: 0.7524 - val_loss: 0.1816 - val_acc: 0.8483
Epoch 97/900
820/820 [==============================] - 0s 105us/step - loss: 0.1791 - acc: 0.7561 - val_loss: 0.1643 - val_acc: 0.8708
Epoch 98/900
820/820 [==============================] - 0s 94us/step - loss: 0.1787 - acc: 0.7537 - val_loss: 0.1730 - val_acc: 0.8596
Epoch 99/900
820/820 [==============================] - 0s 97us/step - loss: 0.1781 - acc: 0.7561 - val_loss: 0.1790 - val_acc: 0.8483
Epoch 100/900
820/820 [==============================] - 0s 96us/step - loss: 0.1783 - acc: 0.7549 - val_loss: 0.1822 - val_acc: 0.8483
Epoch 101/900
820/820 [==============================] - 0s 95us/step - loss: 0.1781 - acc: 0.7549 - val_loss: 0.1711 - val_acc: 0.8596
Epoch 102/900
820/820 [==============================] - 0s 100us/step - loss: 0.1778 - acc: 0.7549 - val_loss: 0.1726 - val_acc: 0.8596
Epoch 103/900
820/820 [==============================] - 0s 98us/step - loss: 0.1779 - acc: 0.7549 - val_loss: 0.1668 - val_acc: 0.8708
Epoch 104/900
820/820 [==============================] - 0s 94us/step - loss: 0.1776 - acc: 0.7573 - val_loss: 0.1764 - val_acc: 0.8539
Epoch 105/900
820/820 [==============================] - 0s 99us/step - loss: 0.1773 - acc: 0.7573 - val_loss: 0.1732 - val_acc: 0.8596
Epoch 106/900
820/820 [==============================] - 0s 100us/step - loss: 0.1773 - acc: 0.7598 - val_loss: 0.1735 - val_acc: 0.8652
Epoch 107/900
820/820 [==============================] - 0s 101us/step - loss: 0.1770 - acc: 0.7561 - val_loss: 0.1731 - val_acc: 0.8652
Epoch 108/900
820/820 [==============================] - 0s 97us/step - loss: 0.1769 - acc: 0.7573 - val_loss: 0.1744 - val_acc: 0.8652
Epoch 109/900
820/820 [==============================] - 0s 105us/step - loss: 0.1770 - acc: 0.7549 - val_loss: 0.1776 - val_acc: 0.8483
Epoch 110/900
820/820 [==============================] - 0s 98us/step - loss: 0.1769 - acc: 0.7549 - val_loss: 0.1698 - val_acc: 0.8652
Epoch 111/900
820/820 [==============================] - 0s 100us/step - loss: 0.1766 - acc: 0.7573 - val_loss: 0.1770 - val_acc: 0.8483
Epoch 112/900
820/820 [==============================] - 0s 96us/step - loss: 0.1765 - acc: 0.7549 - val_loss: 0.1714 - val_acc: 0.8596
Epoch 113/900
820/820 [==============================] - 0s 93us/step - loss: 0.1764 - acc: 0.7561 - val_loss: 0.1732 - val_acc: 0.8652
Epoch 114/900
820/820 [==============================] - 0s 99us/step - loss: 0.1763 - acc: 0.7573 - val_loss: 0.1762 - val_acc: 0.8539
Epoch 115/900
820/820 [==============================] - 0s 95us/step - loss: 0.1764 - acc: 0.7537 - val_loss: 0.1750 - val_acc: 0.8539
Epoch 116/900
820/820 [==============================] - 0s 98us/step - loss: 0.1761 - acc: 0.7573 - val_loss: 0.1696 - val_acc: 0.8596
Epoch 117/900
820/820 [==============================] - 0s 96us/step - loss: 0.1761 - acc: 0.7561 - val_loss: 0.1720 - val_acc: 0.8596
Epoch 118/900
820/820 [==============================] - 0s 95us/step - loss: 0.1761 - acc: 0.7561 - val_loss: 0.1735 - val_acc: 0.8596
Epoch 119/900
820/820 [==============================] - 0s 97us/step - loss: 0.1760 - acc: 0.7573 - val_loss: 0.1713 - val_acc: 0.8596
Epoch 120/900
820/820 [==============================] - 0s 94us/step - loss: 0.1760 - acc: 0.7549 - val_loss: 0.1749 - val_acc: 0.8539
Epoch 121/900
820/820 [==============================] - 0s 206us/step - loss: 0.1757 - acc: 0.7561 - val_loss: 0.1705 - val_acc: 0.8596
Epoch 122/900
820/820 [==============================] - 0s 148us/step - loss: 0.1758 - acc: 0.7537 - val_loss: 0.1798 - val_acc: 0.8483
Epoch 123/900
820/820 [==============================] - 0s 106us/step - loss: 0.1756 - acc: 0.7573 - val_loss: 0.1688 - val_acc: 0.8596
Epoch 124/900
820/820 [==============================] - 0s 124us/step - loss: 0.1756 - acc: 0.7549 - val_loss: 0.1676 - val_acc: 0.8652
Epoch 125/900
820/820 [==============================] - 0s 98us/step - loss: 0.1754 - acc: 0.7549 - val_loss: 0.1771 - val_acc: 0.8483
Epoch 126/900
820/820 [==============================] - 0s 93us/step - loss: 0.1757 - acc: 0.7561 - val_loss: 0.1752 - val_acc: 0.8483
Epoch 127/900
820/820 [==============================] - 0s 94us/step - loss: 0.1752 - acc: 0.7598 - val_loss: 0.1673 - val_acc: 0.8652
Epoch 128/900
820/820 [==============================] - 0s 105us/step - loss: 0.1755 - acc: 0.7561 - val_loss: 0.1675 - val_acc: 0.8652
Epoch 129/900
820/820 [==============================] - 0s 97us/step - loss: 0.1757 - acc: 0.7585 - val_loss: 0.1818 - val_acc: 0.8483
Epoch 130/900
820/820 [==============================] - 0s 92us/step - loss: 0.1754 - acc: 0.7549 - val_loss: 0.1686 - val_acc: 0.8596
Epoch 131/900
820/820 [==============================] - 0s 100us/step - loss: 0.1750 - acc: 0.7561 - val_loss: 0.1709 - val_acc: 0.8539
Epoch 132/900
820/820 [==============================] - 0s 100us/step - loss: 0.1751 - acc: 0.7573 - val_loss: 0.1651 - val_acc: 0.8652
Epoch 133/900
820/820 [==============================] - 0s 97us/step - loss: 0.1754 - acc: 0.7549 - val_loss: 0.1726 - val_acc: 0.8539
Epoch 134/900
820/820 [==============================] - 0s 100us/step - loss: 0.1753 - acc: 0.7561 - val_loss: 0.1597 - val_acc: 0.8708
Epoch 135/900
820/820 [==============================] - 0s 96us/step - loss: 0.1749 - acc: 0.7561 - val_loss: 0.1713 - val_acc: 0.8539
Epoch 136/900
820/820 [==============================] - 0s 96us/step - loss: 0.1753 - acc: 0.7524 - val_loss: 0.1686 - val_acc: 0.8596
Epoch 137/900
820/820 [==============================] - 0s 98us/step - loss: 0.1765 - acc: 0.7500 - val_loss: 0.1499 - val_acc: 0.8708
Epoch 138/900
820/820 [==============================] - 0s 91us/step - loss: 0.1764 - acc: 0.7488 - val_loss: 0.1574 - val_acc: 0.8708
Epoch 139/900
820/820 [==============================] - 0s 95us/step - loss: 0.1746 - acc: 0.7598 - val_loss: 0.1811 - val_acc: 0.8483
Epoch 140/900
820/820 [==============================] - 0s 96us/step - loss: 0.1750 - acc: 0.7561 - val_loss: 0.1697 - val_acc: 0.8539
Epoch 141/900
820/820 [==============================] - 0s 105us/step - loss: 0.1744 - acc: 0.7573 - val_loss: 0.1643 - val_acc: 0.8596
Epoch 142/900
820/820 [==============================] - 0s 93us/step - loss: 0.1748 - acc: 0.7537 - val_loss: 0.1594 - val_acc: 0.8708
Epoch 143/900
820/820 [==============================] - 0s 97us/step - loss: 0.1743 - acc: 0.7598 - val_loss: 0.1745 - val_acc: 0.8483
Epoch 144/900
820/820 [==============================] - 0s 96us/step - loss: 0.1746 - acc: 0.7549 - val_loss: 0.1768 - val_acc: 0.8483
Epoch 145/900
820/820 [==============================] - 0s 94us/step - loss: 0.1740 - acc: 0.7585 - val_loss: 0.1612 - val_acc: 0.8596
Epoch 146/900
820/820 [==============================] - 0s 96us/step - loss: 0.1745 - acc: 0.7549 - val_loss: 0.1688 - val_acc: 0.8596
Epoch 147/900
820/820 [==============================] - 0s 95us/step - loss: 0.1742 - acc: 0.7549 - val_loss: 0.1721 - val_acc: 0.8483
Epoch 148/900
820/820 [==============================] - 0s 93us/step - loss: 0.1741 - acc: 0.7585 - val_loss: 0.1675 - val_acc: 0.8652
Epoch 149/900
820/820 [==============================] - 0s 95us/step - loss: 0.1742 - acc: 0.7573 - val_loss: 0.1780 - val_acc: 0.8483
Epoch 150/900
820/820 [==============================] - 0s 104us/step - loss: 0.1739 - acc: 0.7537 - val_loss: 0.1644 - val_acc: 0.8596
Epoch 151/900
820/820 [==============================] - 0s 97us/step - loss: 0.1744 - acc: 0.7537 - val_loss: 0.1577 - val_acc: 0.8708
Epoch 152/900
820/820 [==============================] - 0s 96us/step - loss: 0.1742 - acc: 0.7573 - val_loss: 0.1689 - val_acc: 0.8539
Epoch 153/900
820/820 [==============================] - 0s 98us/step - loss: 0.1737 - acc: 0.7585 - val_loss: 0.1712 - val_acc: 0.8483
Epoch 154/900
820/820 [==============================] - 0s 156us/step - loss: 0.1738 - acc: 0.7561 - val_loss: 0.1711 - val_acc: 0.8483
Epoch 155/900
820/820 [==============================] - 0s 121us/step - loss: 0.1736 - acc: 0.7573 - val_loss: 0.1658 - val_acc: 0.8652
Epoch 156/900
820/820 [==============================] - 0s 106us/step - loss: 0.1740 - acc: 0.7585 - val_loss: 0.1611 - val_acc: 0.8596
Epoch 157/900
820/820 [==============================] - 0s 113us/step - loss: 0.1736 - acc: 0.7561 - val_loss: 0.1740 - val_acc: 0.8539
Epoch 158/900
820/820 [==============================] - 0s 115us/step - loss: 0.1741 - acc: 0.7537 - val_loss: 0.1696 - val_acc: 0.8483
Epoch 159/900
820/820 [==============================] - 0s 262us/step - loss: 0.1741 - acc: 0.7524 - val_loss: 0.1587 - val_acc: 0.8596
Epoch 160/900
820/820 [==============================] - 0s 174us/step - loss: 0.1734 - acc: 0.7598 - val_loss: 0.1700 - val_acc: 0.8483
Epoch 161/900
820/820 [==============================] - 0s 121us/step - loss: 0.1737 - acc: 0.7549 - val_loss: 0.1747 - val_acc: 0.8483
Epoch 162/900
820/820 [==============================] - 0s 110us/step - loss: 0.1735 - acc: 0.7610 - val_loss: 0.1653 - val_acc: 0.8539
Epoch 163/900
820/820 [==============================] - 0s 144us/step - loss: 0.1734 - acc: 0.7585 - val_loss: 0.1644 - val_acc: 0.8596
Epoch 164/900
820/820 [==============================] - 0s 148us/step - loss: 0.1733 - acc: 0.7585 - val_loss: 0.1634 - val_acc: 0.8596
Epoch 165/900
820/820 [==============================] - 0s 106us/step - loss: 0.1734 - acc: 0.7610 - val_loss: 0.1672 - val_acc: 0.8596
Epoch 166/900
820/820 [==============================] - 0s 103us/step - loss: 0.1732 - acc: 0.7610 - val_loss: 0.1662 - val_acc: 0.8596
Epoch 167/900
820/820 [==============================] - 0s 234us/step - loss: 0.1734 - acc: 0.7561 - val_loss: 0.1696 - val_acc: 0.8483
Epoch 168/900
820/820 [==============================] - 0s 100us/step - loss: 0.1730 - acc: 0.7610 - val_loss: 0.1646 - val_acc: 0.8652
Epoch 169/900
820/820 [==============================] - 0s 140us/step - loss: 0.1731 - acc: 0.7598 - val_loss: 0.1647 - val_acc: 0.8652
Epoch 170/900
820/820 [==============================] - 0s 151us/step - loss: 0.1731 - acc: 0.7610 - val_loss: 0.1644 - val_acc: 0.8652
Epoch 171/900
820/820 [==============================] - 0s 111us/step - loss: 0.1732 - acc: 0.7561 - val_loss: 0.1703 - val_acc: 0.8483
Epoch 172/900
820/820 [==============================] - 0s 117us/step - loss: 0.1729 - acc: 0.7610 - val_loss: 0.1617 - val_acc: 0.8596
Epoch 173/900
820/820 [==============================] - 0s 114us/step - loss: 0.1730 - acc: 0.7610 - val_loss: 0.1673 - val_acc: 0.8539
Epoch 174/900
820/820 [==============================] - 0s 98us/step - loss: 0.1730 - acc: 0.7573 - val_loss: 0.1706 - val_acc: 0.8483
Epoch 175/900
820/820 [==============================] - 0s 95us/step - loss: 0.1729 - acc: 0.7585 - val_loss: 0.1653 - val_acc: 0.8652
Epoch 176/900
820/820 [==============================] - 0s 108us/step - loss: 0.1729 - acc: 0.7610 - val_loss: 0.1635 - val_acc: 0.8596
Epoch 177/900
820/820 [==============================] - 0s 115us/step - loss: 0.1731 - acc: 0.7598 - val_loss: 0.1609 - val_acc: 0.8596
Epoch 178/900
820/820 [==============================] - 0s 115us/step - loss: 0.1728 - acc: 0.7610 - val_loss: 0.1693 - val_acc: 0.8483
Epoch 179/900
820/820 [==============================] - 0s 99us/step - loss: 0.1731 - acc: 0.7598 - val_loss: 0.1632 - val_acc: 0.8652
Epoch 180/900
820/820 [==============================] - 0s 105us/step - loss: 0.1727 - acc: 0.7610 - val_loss: 0.1676 - val_acc: 0.8483
Epoch 181/900
820/820 [==============================] - 0s 109us/step - loss: 0.1730 - acc: 0.7598 - val_loss: 0.1596 - val_acc: 0.8596
Epoch 182/900
820/820 [==============================] - 0s 104us/step - loss: 0.1727 - acc: 0.7561 - val_loss: 0.1685 - val_acc: 0.8483
Epoch 183/900
820/820 [==============================] - 0s 103us/step - loss: 0.1727 - acc: 0.7598 - val_loss: 0.1616 - val_acc: 0.8596
Epoch 184/900
820/820 [==============================] - 0s 104us/step - loss: 0.1728 - acc: 0.7598 - val_loss: 0.1691 - val_acc: 0.8483
Epoch 185/900
820/820 [==============================] - 0s 100us/step - loss: 0.1727 - acc: 0.7561 - val_loss: 0.1663 - val_acc: 0.8539
Epoch 186/900
820/820 [==============================] - 0s 97us/step - loss: 0.1727 - acc: 0.7598 - val_loss: 0.1682 - val_acc: 0.8539
Epoch 187/900
820/820 [==============================] - 0s 103us/step - loss: 0.1727 - acc: 0.7573 - val_loss: 0.1656 - val_acc: 0.8596
Epoch 188/900
820/820 [==============================] - 0s 102us/step - loss: 0.1727 - acc: 0.7573 - val_loss: 0.1542 - val_acc: 0.8652
Epoch 189/900
820/820 [==============================] - 0s 105us/step - loss: 0.1733 - acc: 0.7537 - val_loss: 0.1571 - val_acc: 0.8596
Epoch 190/900
820/820 [==============================] - 0s 101us/step - loss: 0.1729 - acc: 0.7561 - val_loss: 0.1595 - val_acc: 0.8596
Epoch 191/900
820/820 [==============================] - 0s 96us/step - loss: 0.1727 - acc: 0.7585 - val_loss: 0.1655 - val_acc: 0.8539
Epoch 192/900
820/820 [==============================] - 0s 95us/step - loss: 0.1729 - acc: 0.7537 - val_loss: 0.1730 - val_acc: 0.8427
Epoch 193/900
820/820 [==============================] - 0s 96us/step - loss: 0.1726 - acc: 0.7585 - val_loss: 0.1632 - val_acc: 0.8652
Epoch 194/900
820/820 [==============================] - 0s 101us/step - loss: 0.1725 - acc: 0.7585 - val_loss: 0.1604 - val_acc: 0.8596
Epoch 195/900
820/820 [==============================] - 0s 97us/step - loss: 0.1723 - acc: 0.7622 - val_loss: 0.1695 - val_acc: 0.8483
Epoch 196/900
820/820 [==============================] - 0s 113us/step - loss: 0.1748 - acc: 0.7500 - val_loss: 0.1766 - val_acc: 0.8427
Epoch 197/900
820/820 [==============================] - 0s 103us/step - loss: 0.1730 - acc: 0.7561 - val_loss: 0.1536 - val_acc: 0.8708
Epoch 198/900
820/820 [==============================] - 0s 100us/step - loss: 0.1730 - acc: 0.7561 - val_loss: 0.1619 - val_acc: 0.8652
Epoch 199/900
820/820 [==============================] - 0s 104us/step - loss: 0.1723 - acc: 0.7610 - val_loss: 0.1677 - val_acc: 0.8483
Epoch 200/900
820/820 [==============================] - 0s 101us/step - loss: 0.1725 - acc: 0.7549 - val_loss: 0.1728 - val_acc: 0.8427
Epoch 201/900
820/820 [==============================] - 0s 107us/step - loss: 0.1726 - acc: 0.7537 - val_loss: 0.1654 - val_acc: 0.8539
Epoch 202/900
820/820 [==============================] - 0s 108us/step - loss: 0.1729 - acc: 0.7561 - val_loss: 0.1580 - val_acc: 0.8596
Epoch 203/900
820/820 [==============================] - 0s 102us/step - loss: 0.1725 - acc: 0.7573 - val_loss: 0.1702 - val_acc: 0.8427
Epoch 204/900
820/820 [==============================] - 0s 103us/step - loss: 0.1726 - acc: 0.7598 - val_loss: 0.1658 - val_acc: 0.8539
Epoch 205/900
820/820 [==============================] - 0s 135us/step - loss: 0.1725 - acc: 0.7585 - val_loss: 0.1589 - val_acc: 0.8652
Epoch 206/900
820/820 [==============================] - 0s 152us/step - loss: 0.1724 - acc: 0.7585 - val_loss: 0.1691 - val_acc: 0.8427
Epoch 207/900
820/820 [==============================] - 0s 177us/step - loss: 0.1722 - acc: 0.7561 - val_loss: 0.1678 - val_acc: 0.8483
Epoch 208/900
820/820 [==============================] - 0s 147us/step - loss: 0.1724 - acc: 0.7610 - val_loss: 0.1616 - val_acc: 0.8652
Epoch 209/900
820/820 [==============================] - 0s 140us/step - loss: 0.1723 - acc: 0.7549 - val_loss: 0.1740 - val_acc: 0.8427
Epoch 210/900
820/820 [==============================] - 0s 167us/step - loss: 0.1723 - acc: 0.7573 - val_loss: 0.1627 - val_acc: 0.8596
Epoch 211/900
820/820 [==============================] - 0s 132us/step - loss: 0.1724 - acc: 0.7585 - val_loss: 0.1579 - val_acc: 0.8596
Epoch 212/900
820/820 [==============================] - 0s 140us/step - loss: 0.1721 - acc: 0.7573 - val_loss: 0.1662 - val_acc: 0.8483
Epoch 213/900
820/820 [==============================] - 0s 143us/step - loss: 0.1721 - acc: 0.7573 - val_loss: 0.1681 - val_acc: 0.8483
Epoch 214/900
820/820 [==============================] - 0s 145us/step - loss: 0.1720 - acc: 0.7598 - val_loss: 0.1638 - val_acc: 0.8596
Epoch 215/900
820/820 [==============================] - 0s 165us/step - loss: 0.1721 - acc: 0.7622 - val_loss: 0.1631 - val_acc: 0.8596
Epoch 216/900
820/820 [==============================] - 0s 148us/step - loss: 0.1720 - acc: 0.7585 - val_loss: 0.1644 - val_acc: 0.8596
Epoch 217/900
820/820 [==============================] - 0s 143us/step - loss: 0.1722 - acc: 0.7549 - val_loss: 0.1662 - val_acc: 0.8539
Epoch 218/900
820/820 [==============================] - 0s 154us/step - loss: 0.1732 - acc: 0.7573 - val_loss: 0.1475 - val_acc: 0.8764
Epoch 219/900
820/820 [==============================] - 0s 141us/step - loss: 0.1731 - acc: 0.7561 - val_loss: 0.1594 - val_acc: 0.8596
Epoch 220/900
820/820 [==============================] - 0s 135us/step - loss: 0.1721 - acc: 0.7585 - val_loss: 0.1734 - val_acc: 0.8427
Epoch 221/900
820/820 [==============================] - 0s 124us/step - loss: 0.1719 - acc: 0.7573 - val_loss: 0.1616 - val_acc: 0.8596
Epoch 222/900
820/820 [==============================] - 0s 122us/step - loss: 0.1720 - acc: 0.7561 - val_loss: 0.1581 - val_acc: 0.8708
Epoch 223/900
820/820 [==============================] - 0s 135us/step - loss: 0.1721 - acc: 0.7549 - val_loss: 0.1589 - val_acc: 0.8708
Epoch 224/900
820/820 [==============================] - 0s 118us/step - loss: 0.1719 - acc: 0.7561 - val_loss: 0.1652 - val_acc: 0.8483
Epoch 225/900
820/820 [==============================] - 0s 116us/step - loss: 0.1720 - acc: 0.7585 - val_loss: 0.1701 - val_acc: 0.8427
Epoch 226/900
820/820 [==============================] - 0s 118us/step - loss: 0.1718 - acc: 0.7585 - val_loss: 0.1649 - val_acc: 0.8596
Epoch 227/900
820/820 [==============================] - 0s 119us/step - loss: 0.1720 - acc: 0.7561 - val_loss: 0.1613 - val_acc: 0.8596
Epoch 228/900
820/820 [==============================] - 0s 133us/step - loss: 0.1717 - acc: 0.7610 - val_loss: 0.1656 - val_acc: 0.8539
Epoch 229/900
820/820 [==============================] - 0s 137us/step - loss: 0.1719 - acc: 0.7573 - val_loss: 0.1704 - val_acc: 0.8427
Epoch 230/900
820/820 [==============================] - 0s 131us/step - loss: 0.1728 - acc: 0.7537 - val_loss: 0.1671 - val_acc: 0.8427
Epoch 231/900
820/820 [==============================] - 0s 143us/step - loss: 0.1717 - acc: 0.7573 - val_loss: 0.1567 - val_acc: 0.8708
Epoch 232/900
820/820 [==============================] - 0s 134us/step - loss: 0.1719 - acc: 0.7549 - val_loss: 0.1626 - val_acc: 0.8596
Epoch 233/900
820/820 [==============================] - 0s 132us/step - loss: 0.1721 - acc: 0.7524 - val_loss: 0.1724 - val_acc: 0.8483
Epoch 234/900
820/820 [==============================] - 0s 159us/step - loss: 0.1720 - acc: 0.7537 - val_loss: 0.1635 - val_acc: 0.8539
Epoch 235/900
820/820 [==============================] - 0s 156us/step - loss: 0.1716 - acc: 0.7573 - val_loss: 0.1702 - val_acc: 0.8483
Epoch 236/900
820/820 [==============================] - 0s 174us/step - loss: 0.1719 - acc: 0.7561 - val_loss: 0.1711 - val_acc: 0.8483
Epoch 237/900
820/820 [==============================] - 0s 163us/step - loss: 0.1719 - acc: 0.7561 - val_loss: 0.1603 - val_acc: 0.8652
Epoch 238/900
820/820 [==============================] - 0s 143us/step - loss: 0.1718 - acc: 0.7573 - val_loss: 0.1614 - val_acc: 0.8596
Epoch 239/900
820/820 [==============================] - 0s 144us/step - loss: 0.1715 - acc: 0.7598 - val_loss: 0.1685 - val_acc: 0.8427
Epoch 240/900
820/820 [==============================] - 0s 152us/step - loss: 0.1721 - acc: 0.7549 - val_loss: 0.1708 - val_acc: 0.8427
Epoch 241/900
820/820 [==============================] - 0s 144us/step - loss: 0.1717 - acc: 0.7573 - val_loss: 0.1618 - val_acc: 0.8652
Epoch 242/900
820/820 [==============================] - 0s 152us/step - loss: 0.1715 - acc: 0.7598 - val_loss: 0.1663 - val_acc: 0.8483
Epoch 243/900
820/820 [==============================] - 0s 134us/step - loss: 0.1716 - acc: 0.7573 - val_loss: 0.1676 - val_acc: 0.8427
Epoch 244/900
820/820 [==============================] - 0s 141us/step - loss: 0.1717 - acc: 0.7573 - val_loss: 0.1630 - val_acc: 0.8539
Epoch 245/900
820/820 [==============================] - 0s 133us/step - loss: 0.1714 - acc: 0.7585 - val_loss: 0.1649 - val_acc: 0.8539
Epoch 246/900
820/820 [==============================] - 0s 126us/step - loss: 0.1714 - acc: 0.7585 - val_loss: 0.1618 - val_acc: 0.8596
Epoch 247/900
820/820 [==============================] - 0s 135us/step - loss: 0.1716 - acc: 0.7573 - val_loss: 0.1575 - val_acc: 0.8596
Epoch 248/900
820/820 [==============================] - 0s 146us/step - loss: 0.1714 - acc: 0.7561 - val_loss: 0.1672 - val_acc: 0.8427
Epoch 249/900
820/820 [==============================] - 0s 146us/step - loss: 0.1714 - acc: 0.7585 - val_loss: 0.1626 - val_acc: 0.8596
Epoch 250/900
820/820 [==============================] - 0s 149us/step - loss: 0.1714 - acc: 0.7598 - val_loss: 0.1560 - val_acc: 0.8652
Epoch 251/900
820/820 [==============================] - 0s 133us/step - loss: 0.1717 - acc: 0.7585 - val_loss: 0.1636 - val_acc: 0.8539
Epoch 252/900
820/820 [==============================] - 0s 135us/step - loss: 0.1713 - acc: 0.7598 - val_loss: 0.1614 - val_acc: 0.8596
Epoch 253/900
820/820 [==============================] - 0s 122us/step - loss: 0.1713 - acc: 0.7598 - val_loss: 0.1652 - val_acc: 0.8539
Epoch 254/900
820/820 [==============================] - 0s 104us/step - loss: 0.1715 - acc: 0.7573 - val_loss: 0.1661 - val_acc: 0.8539
Epoch 255/900
820/820 [==============================] - 0s 103us/step - loss: 0.1713 - acc: 0.7598 - val_loss: 0.1617 - val_acc: 0.8652
Epoch 256/900
820/820 [==============================] - 0s 104us/step - loss: 0.1712 - acc: 0.7598 - val_loss: 0.1655 - val_acc: 0.8483
Epoch 257/900
820/820 [==============================] - 0s 148us/step - loss: 0.1713 - acc: 0.7598 - val_loss: 0.1622 - val_acc: 0.8539
Epoch 258/900
820/820 [==============================] - 0s 159us/step - loss: 0.1712 - acc: 0.7598 - val_loss: 0.1645 - val_acc: 0.8539
Epoch 259/900
820/820 [==============================] - 0s 127us/step - loss: 0.1713 - acc: 0.7585 - val_loss: 0.1707 - val_acc: 0.8427
Epoch 260/900
820/820 [==============================] - 0s 111us/step - loss: 0.1715 - acc: 0.7561 - val_loss: 0.1707 - val_acc: 0.8427
Epoch 261/900
820/820 [==============================] - 0s 117us/step - loss: 0.1714 - acc: 0.7585 - val_loss: 0.1639 - val_acc: 0.8539
Epoch 262/900
820/820 [==============================] - 0s 110us/step - loss: 0.1711 - acc: 0.7598 - val_loss: 0.1635 - val_acc: 0.8539
Epoch 263/900
820/820 [==============================] - 0s 124us/step - loss: 0.1711 - acc: 0.7610 - val_loss: 0.1630 - val_acc: 0.8596
Epoch 264/900
820/820 [==============================] - 0s 115us/step - loss: 0.1712 - acc: 0.7610 - val_loss: 0.1606 - val_acc: 0.8652
Epoch 265/900
820/820 [==============================] - 0s 132us/step - loss: 0.1712 - acc: 0.7573 - val_loss: 0.1588 - val_acc: 0.8652
Epoch 266/900
820/820 [==============================] - 0s 131us/step - loss: 0.1711 - acc: 0.7610 - val_loss: 0.1622 - val_acc: 0.8539
Epoch 267/900
820/820 [==============================] - 0s 105us/step - loss: 0.1712 - acc: 0.7598 - val_loss: 0.1627 - val_acc: 0.8539
Epoch 268/900
820/820 [==============================] - 0s 107us/step - loss: 0.1710 - acc: 0.7598 - val_loss: 0.1582 - val_acc: 0.8652
Epoch 269/900
820/820 [==============================] - 0s 101us/step - loss: 0.1711 - acc: 0.7561 - val_loss: 0.1615 - val_acc: 0.8596
Epoch 270/900
820/820 [==============================] - 0s 101us/step - loss: 0.1711 - acc: 0.7598 - val_loss: 0.1640 - val_acc: 0.8539
Epoch 271/900
820/820 [==============================] - 0s 110us/step - loss: 0.1713 - acc: 0.7573 - val_loss: 0.1561 - val_acc: 0.8596
Epoch 272/900
820/820 [==============================] - 0s 113us/step - loss: 0.1715 - acc: 0.7573 - val_loss: 0.1553 - val_acc: 0.8596
Epoch 273/900
820/820 [==============================] - 0s 107us/step - loss: 0.1712 - acc: 0.7573 - val_loss: 0.1601 - val_acc: 0.8652
Epoch 274/900
820/820 [==============================] - 0s 110us/step - loss: 0.1710 - acc: 0.7573 - val_loss: 0.1620 - val_acc: 0.8596
Epoch 275/900
820/820 [==============================] - 0s 116us/step - loss: 0.1710 - acc: 0.7610 - val_loss: 0.1698 - val_acc: 0.8483
Epoch 276/900
820/820 [==============================] - 0s 105us/step - loss: 0.1717 - acc: 0.7549 - val_loss: 0.1652 - val_acc: 0.8596
Epoch 277/900
820/820 [==============================] - 0s 106us/step - loss: 0.1720 - acc: 0.7598 - val_loss: 0.1506 - val_acc: 0.8708
Epoch 278/900
820/820 [==============================] - 0s 111us/step - loss: 0.1715 - acc: 0.7573 - val_loss: 0.1608 - val_acc: 0.8596
Epoch 279/900
820/820 [==============================] - 0s 102us/step - loss: 0.1712 - acc: 0.7573 - val_loss: 0.1693 - val_acc: 0.8483
Epoch 280/900
820/820 [==============================] - 0s 103us/step - loss: 0.1709 - acc: 0.7561 - val_loss: 0.1569 - val_acc: 0.8652
Epoch 281/900
820/820 [==============================] - 0s 117us/step - loss: 0.1713 - acc: 0.7549 - val_loss: 0.1576 - val_acc: 0.8652
Epoch 282/900
820/820 [==============================] - 0s 122us/step - loss: 0.1709 - acc: 0.7573 - val_loss: 0.1607 - val_acc: 0.8652
Epoch 283/900
820/820 [==============================] - 0s 159us/step - loss: 0.1715 - acc: 0.7537 - val_loss: 0.1684 - val_acc: 0.8483
Epoch 284/900
820/820 [==============================] - 0s 160us/step - loss: 0.1709 - acc: 0.7610 - val_loss: 0.1604 - val_acc: 0.8652
Epoch 285/900
820/820 [==============================] - 0s 211us/step - loss: 0.1710 - acc: 0.7585 - val_loss: 0.1636 - val_acc: 0.8596
Epoch 286/900
820/820 [==============================] - 0s 181us/step - loss: 0.1708 - acc: 0.7610 - val_loss: 0.1635 - val_acc: 0.8652
Epoch 287/900
820/820 [==============================] - 0s 159us/step - loss: 0.1708 - acc: 0.7573 - val_loss: 0.1659 - val_acc: 0.8539
Epoch 288/900
820/820 [==============================] - 0s 147us/step - loss: 0.1708 - acc: 0.7585 - val_loss: 0.1600 - val_acc: 0.8652
Epoch 289/900
820/820 [==============================] - 0s 170us/step - loss: 0.1716 - acc: 0.7561 - val_loss: 0.1503 - val_acc: 0.8708
Epoch 290/900
820/820 [==============================] - 0s 169us/step - loss: 0.1712 - acc: 0.7537 - val_loss: 0.1724 - val_acc: 0.8483
Epoch 291/900
820/820 [==============================] - 0s 158us/step - loss: 0.1714 - acc: 0.7561 - val_loss: 0.1611 - val_acc: 0.8596
Epoch 292/900
820/820 [==============================] - 0s 170us/step - loss: 0.1708 - acc: 0.7585 - val_loss: 0.1657 - val_acc: 0.8596
Epoch 293/900
820/820 [==============================] - 0s 154us/step - loss: 0.1712 - acc: 0.7537 - val_loss: 0.1671 - val_acc: 0.8539
Epoch 294/900
820/820 [==============================] - 0s 123us/step - loss: 0.1707 - acc: 0.7585 - val_loss: 0.1585 - val_acc: 0.8652
Epoch 295/900
820/820 [==============================] - 0s 130us/step - loss: 0.1708 - acc: 0.7585 - val_loss: 0.1566 - val_acc: 0.8652
Epoch 296/900
820/820 [==============================] - 0s 138us/step - loss: 0.1707 - acc: 0.7573 - val_loss: 0.1648 - val_acc: 0.8539
Epoch 297/900
820/820 [==============================] - 0s 159us/step - loss: 0.1712 - acc: 0.7549 - val_loss: 0.1689 - val_acc: 0.8483
Epoch 298/900
820/820 [==============================] - 0s 175us/step - loss: 0.1707 - acc: 0.7585 - val_loss: 0.1612 - val_acc: 0.8652
Epoch 299/900
820/820 [==============================] - 0s 159us/step - loss: 0.1706 - acc: 0.7573 - val_loss: 0.1568 - val_acc: 0.8652
Epoch 300/900
820/820 [==============================] - 0s 145us/step - loss: 0.1709 - acc: 0.7585 - val_loss: 0.1579 - val_acc: 0.8652
Epoch 301/900
820/820 [==============================] - 0s 241us/step - loss: 0.1708 - acc: 0.7610 - val_loss: 0.1710 - val_acc: 0.8483
Epoch 302/900
820/820 [==============================] - 0s 110us/step - loss: 0.1710 - acc: 0.7561 - val_loss: 0.1677 - val_acc: 0.8483
Epoch 303/900
820/820 [==============================] - 0s 101us/step - loss: 0.1707 - acc: 0.7573 - val_loss: 0.1621 - val_acc: 0.8596
Epoch 304/900
820/820 [==============================] - 0s 100us/step - loss: 0.1706 - acc: 0.7598 - val_loss: 0.1585 - val_acc: 0.8652
Epoch 305/900
820/820 [==============================] - 0s 98us/step - loss: 0.1708 - acc: 0.7585 - val_loss: 0.1536 - val_acc: 0.8596
Epoch 306/900
820/820 [==============================] - 0s 104us/step - loss: 0.1707 - acc: 0.7561 - val_loss: 0.1627 - val_acc: 0.8652
Epoch 307/900
820/820 [==============================] - 0s 102us/step - loss: 0.1706 - acc: 0.7598 - val_loss: 0.1655 - val_acc: 0.8596
Epoch 308/900
820/820 [==============================] - 0s 104us/step - loss: 0.1705 - acc: 0.7598 - val_loss: 0.1602 - val_acc: 0.8652
Epoch 309/900
820/820 [==============================] - 0s 108us/step - loss: 0.1705 - acc: 0.7573 - val_loss: 0.1579 - val_acc: 0.8708
Epoch 310/900
820/820 [==============================] - 0s 140us/step - loss: 0.1706 - acc: 0.7573 - val_loss: 0.1628 - val_acc: 0.8652
Epoch 311/900
820/820 [==============================] - 0s 167us/step - loss: 0.1705 - acc: 0.7610 - val_loss: 0.1653 - val_acc: 0.8596
Epoch 312/900
820/820 [==============================] - 0s 139us/step - loss: 0.1705 - acc: 0.7610 - val_loss: 0.1621 - val_acc: 0.8652
Epoch 313/900
820/820 [==============================] - 0s 107us/step - loss: 0.1705 - acc: 0.7561 - val_loss: 0.1576 - val_acc: 0.8652
Epoch 314/900
820/820 [==============================] - 0s 101us/step - loss: 0.1705 - acc: 0.7585 - val_loss: 0.1612 - val_acc: 0.8596
Epoch 315/900
820/820 [==============================] - 0s 176us/step - loss: 0.1706 - acc: 0.7573 - val_loss: 0.1556 - val_acc: 0.8596
Epoch 316/900
820/820 [==============================] - 0s 105us/step - loss: 0.1709 - acc: 0.7598 - val_loss: 0.1561 - val_acc: 0.8596
Epoch 317/900
820/820 [==============================] - 0s 101us/step - loss: 0.1707 - acc: 0.7561 - val_loss: 0.1580 - val_acc: 0.8596
Epoch 318/900
820/820 [==============================] - 0s 99us/step - loss: 0.1707 - acc: 0.7585 - val_loss: 0.1560 - val_acc: 0.8596
Epoch 319/900
820/820 [==============================] - 0s 94us/step - loss: 0.1704 - acc: 0.7585 - val_loss: 0.1616 - val_acc: 0.8596
Epoch 320/900
820/820 [==============================] - 0s 102us/step - loss: 0.1705 - acc: 0.7598 - val_loss: 0.1582 - val_acc: 0.8652
Epoch 321/900
820/820 [==============================] - 0s 97us/step - loss: 0.1707 - acc: 0.7573 - val_loss: 0.1648 - val_acc: 0.8596
Epoch 322/900
820/820 [==============================] - 0s 94us/step - loss: 0.1710 - acc: 0.7598 - val_loss: 0.1554 - val_acc: 0.8596
Epoch 323/900
820/820 [==============================] - 0s 100us/step - loss: 0.1704 - acc: 0.7585 - val_loss: 0.1676 - val_acc: 0.8483
Epoch 324/900
820/820 [==============================] - 0s 103us/step - loss: 0.1704 - acc: 0.7598 - val_loss: 0.1612 - val_acc: 0.8652
Epoch 325/900
820/820 [==============================] - 0s 105us/step - loss: 0.1704 - acc: 0.7598 - val_loss: 0.1645 - val_acc: 0.8652
Epoch 326/900
820/820 [==============================] - 0s 106us/step - loss: 0.1705 - acc: 0.7585 - val_loss: 0.1651 - val_acc: 0.8596
Epoch 327/900
820/820 [==============================] - 0s 100us/step - loss: 0.1703 - acc: 0.7598 - val_loss: 0.1642 - val_acc: 0.8652
Epoch 328/900
820/820 [==============================] - 0s 100us/step - loss: 0.1703 - acc: 0.7610 - val_loss: 0.1620 - val_acc: 0.8652
Epoch 329/900
820/820 [==============================] - 0s 106us/step - loss: 0.1703 - acc: 0.7610 - val_loss: 0.1598 - val_acc: 0.8652
Epoch 330/900
820/820 [==============================] - 0s 98us/step - loss: 0.1703 - acc: 0.7598 - val_loss: 0.1621 - val_acc: 0.8652
Epoch 331/900
820/820 [==============================] - 0s 101us/step - loss: 0.1707 - acc: 0.7585 - val_loss: 0.1654 - val_acc: 0.8596
Epoch 332/900
820/820 [==============================] - 0s 105us/step - loss: 0.1708 - acc: 0.7585 - val_loss: 0.1516 - val_acc: 0.8708
Epoch 333/900
820/820 [==============================] - 0s 98us/step - loss: 0.1707 - acc: 0.7610 - val_loss: 0.1565 - val_acc: 0.8652
Epoch 334/900
820/820 [==============================] - 0s 97us/step - loss: 0.1702 - acc: 0.7585 - val_loss: 0.1622 - val_acc: 0.8652
Epoch 335/900
820/820 [==============================] - 0s 100us/step - loss: 0.1702 - acc: 0.7610 - val_loss: 0.1645 - val_acc: 0.8596
Epoch 336/900
820/820 [==============================] - 0s 98us/step - loss: 0.1705 - acc: 0.7622 - val_loss: 0.1654 - val_acc: 0.8596
Epoch 337/900
820/820 [==============================] - 0s 99us/step - loss: 0.1702 - acc: 0.7610 - val_loss: 0.1643 - val_acc: 0.8652
Epoch 338/900
820/820 [==============================] - 0s 102us/step - loss: 0.1703 - acc: 0.7598 - val_loss: 0.1564 - val_acc: 0.8708
Epoch 339/900
820/820 [==============================] - 0s 97us/step - loss: 0.1705 - acc: 0.7598 - val_loss: 0.1621 - val_acc: 0.8652
Epoch 340/900
820/820 [==============================] - 0s 106us/step - loss: 0.1704 - acc: 0.7598 - val_loss: 0.1599 - val_acc: 0.8652
Epoch 341/900
820/820 [==============================] - 0s 99us/step - loss: 0.1701 - acc: 0.7598 - val_loss: 0.1661 - val_acc: 0.8596
Epoch 342/900
820/820 [==============================] - 0s 98us/step - loss: 0.1702 - acc: 0.7610 - val_loss: 0.1657 - val_acc: 0.8596
Epoch 343/900
820/820 [==============================] - 0s 104us/step - loss: 0.1699 - acc: 0.7610 - val_loss: 0.1575 - val_acc: 0.8708
Epoch 344/900
820/820 [==============================] - 0s 130us/step - loss: 0.1703 - acc: 0.7585 - val_loss: 0.1574 - val_acc: 0.8708
Epoch 345/900
820/820 [==============================] - 0s 113us/step - loss: 0.1701 - acc: 0.7598 - val_loss: 0.1701 - val_acc: 0.8539
Epoch 346/900
820/820 [==============================] - 0s 100us/step - loss: 0.1702 - acc: 0.7573 - val_loss: 0.1624 - val_acc: 0.8652
Epoch 347/900
820/820 [==============================] - 0s 96us/step - loss: 0.1703 - acc: 0.7573 - val_loss: 0.1520 - val_acc: 0.8708
Epoch 348/900
820/820 [==============================] - 0s 100us/step - loss: 0.1704 - acc: 0.7598 - val_loss: 0.1588 - val_acc: 0.8652
Epoch 349/900
820/820 [==============================] - 0s 97us/step - loss: 0.1700 - acc: 0.7610 - val_loss: 0.1642 - val_acc: 0.8652
Epoch 350/900
820/820 [==============================] - 0s 101us/step - loss: 0.1700 - acc: 0.7610 - val_loss: 0.1609 - val_acc: 0.8652
Epoch 351/900
820/820 [==============================] - 0s 100us/step - loss: 0.1703 - acc: 0.7585 - val_loss: 0.1606 - val_acc: 0.8652
Epoch 352/900
820/820 [==============================] - 0s 101us/step - loss: 0.1703 - acc: 0.7634 - val_loss: 0.1523 - val_acc: 0.8708
Epoch 353/900
820/820 [==============================] - 0s 99us/step - loss: 0.1702 - acc: 0.7585 - val_loss: 0.1613 - val_acc: 0.8652
Epoch 354/900
820/820 [==============================] - 0s 101us/step - loss: 0.1700 - acc: 0.7610 - val_loss: 0.1678 - val_acc: 0.8539
Epoch 355/900
820/820 [==============================] - 0s 100us/step - loss: 0.1702 - acc: 0.7561 - val_loss: 0.1664 - val_acc: 0.8596
Epoch 356/900
820/820 [==============================] - 0s 102us/step - loss: 0.1702 - acc: 0.7610 - val_loss: 0.1585 - val_acc: 0.8652
Epoch 357/900
820/820 [==============================] - 0s 99us/step - loss: 0.1703 - acc: 0.7573 - val_loss: 0.1575 - val_acc: 0.8708
Epoch 358/900
820/820 [==============================] - 0s 100us/step - loss: 0.1698 - acc: 0.7573 - val_loss: 0.1661 - val_acc: 0.8652
Epoch 359/900
820/820 [==============================] - 0s 101us/step - loss: 0.1700 - acc: 0.7598 - val_loss: 0.1612 - val_acc: 0.8652
Epoch 360/900
820/820 [==============================] - 0s 97us/step - loss: 0.1701 - acc: 0.7585 - val_loss: 0.1540 - val_acc: 0.8764
Epoch 361/900
820/820 [==============================] - 0s 97us/step - loss: 0.1698 - acc: 0.7585 - val_loss: 0.1643 - val_acc: 0.8652
Epoch 362/900
820/820 [==============================] - 0s 106us/step - loss: 0.1704 - acc: 0.7549 - val_loss: 0.1736 - val_acc: 0.8483
Epoch 363/900
820/820 [==============================] - 0s 104us/step - loss: 0.1704 - acc: 0.7585 - val_loss: 0.1561 - val_acc: 0.8708
Epoch 364/900
820/820 [==============================] - 0s 108us/step - loss: 0.1700 - acc: 0.7585 - val_loss: 0.1596 - val_acc: 0.8652
Epoch 365/900
820/820 [==============================] - 0s 97us/step - loss: 0.1699 - acc: 0.7598 - val_loss: 0.1612 - val_acc: 0.8652
Epoch 366/900
820/820 [==============================] - 0s 101us/step - loss: 0.1698 - acc: 0.7585 - val_loss: 0.1600 - val_acc: 0.8652
Epoch 367/900
820/820 [==============================] - 0s 102us/step - loss: 0.1697 - acc: 0.7598 - val_loss: 0.1651 - val_acc: 0.8652
Epoch 368/900
820/820 [==============================] - 0s 100us/step - loss: 0.1699 - acc: 0.7598 - val_loss: 0.1695 - val_acc: 0.8596
Epoch 369/900
820/820 [==============================] - 0s 100us/step - loss: 0.1700 - acc: 0.7598 - val_loss: 0.1640 - val_acc: 0.8652
Epoch 370/900
820/820 [==============================] - 0s 96us/step - loss: 0.1699 - acc: 0.7573 - val_loss: 0.1601 - val_acc: 0.8652
Epoch 371/900
820/820 [==============================] - 0s 96us/step - loss: 0.1698 - acc: 0.7585 - val_loss: 0.1600 - val_acc: 0.8652
Epoch 372/900
820/820 [==============================] - 0s 101us/step - loss: 0.1701 - acc: 0.7561 - val_loss: 0.1554 - val_acc: 0.8708
Epoch 373/900
820/820 [==============================] - 0s 102us/step - loss: 0.1702 - acc: 0.7598 - val_loss: 0.1650 - val_acc: 0.8652
Epoch 374/900
820/820 [==============================] - 0s 138us/step - loss: 0.1697 - acc: 0.7610 - val_loss: 0.1575 - val_acc: 0.8708
Epoch 375/900
820/820 [==============================] - 0s 134us/step - loss: 0.1697 - acc: 0.7598 - val_loss: 0.1626 - val_acc: 0.8652
Epoch 376/900
820/820 [==============================] - 0s 147us/step - loss: 0.1699 - acc: 0.7598 - val_loss: 0.1579 - val_acc: 0.8652
Epoch 377/900
820/820 [==============================] - 0s 141us/step - loss: 0.1697 - acc: 0.7598 - val_loss: 0.1612 - val_acc: 0.8652
Epoch 378/900
820/820 [==============================] - 0s 103us/step - loss: 0.1696 - acc: 0.7598 - val_loss: 0.1618 - val_acc: 0.8652
Epoch 379/900
820/820 [==============================] - 0s 175us/step - loss: 0.1697 - acc: 0.7610 - val_loss: 0.1588 - val_acc: 0.8652
Epoch 380/900
820/820 [==============================] - 0s 112us/step - loss: 0.1697 - acc: 0.7622 - val_loss: 0.1669 - val_acc: 0.8596
Epoch 381/900
820/820 [==============================] - 0s 132us/step - loss: 0.1698 - acc: 0.7634 - val_loss: 0.1638 - val_acc: 0.8652
Epoch 382/900
820/820 [==============================] - 0s 137us/step - loss: 0.1696 - acc: 0.7598 - val_loss: 0.1638 - val_acc: 0.8652
Epoch 383/900
820/820 [==============================] - 0s 113us/step - loss: 0.1698 - acc: 0.7598 - val_loss: 0.1627 - val_acc: 0.8652
Epoch 384/900
820/820 [==============================] - 0s 188us/step - loss: 0.1696 - acc: 0.7610 - val_loss: 0.1626 - val_acc: 0.8652
Epoch 385/900
820/820 [==============================] - 0s 141us/step - loss: 0.1698 - acc: 0.7585 - val_loss: 0.1597 - val_acc: 0.8708
Epoch 386/900
820/820 [==============================] - 0s 111us/step - loss: 0.1698 - acc: 0.7610 - val_loss: 0.1628 - val_acc: 0.8652
Epoch 387/900
820/820 [==============================] - 0s 112us/step - loss: 0.1695 - acc: 0.7622 - val_loss: 0.1652 - val_acc: 0.8596
Epoch 388/900
820/820 [==============================] - 0s 122us/step - loss: 0.1696 - acc: 0.7622 - val_loss: 0.1602 - val_acc: 0.8708
Epoch 389/900
820/820 [==============================] - 0s 132us/step - loss: 0.1696 - acc: 0.7610 - val_loss: 0.1628 - val_acc: 0.8652
Epoch 390/900
820/820 [==============================] - 0s 138us/step - loss: 0.1695 - acc: 0.7598 - val_loss: 0.1653 - val_acc: 0.8596
Epoch 391/900
820/820 [==============================] - 0s 109us/step - loss: 0.1696 - acc: 0.7622 - val_loss: 0.1628 - val_acc: 0.8652
Epoch 392/900
820/820 [==============================] - 0s 103us/step - loss: 0.1695 - acc: 0.7622 - val_loss: 0.1567 - val_acc: 0.8708
Epoch 393/900
820/820 [==============================] - 0s 108us/step - loss: 0.1698 - acc: 0.7573 - val_loss: 0.1579 - val_acc: 0.8708
Epoch 394/900
820/820 [==============================] - 0s 113us/step - loss: 0.1702 - acc: 0.7610 - val_loss: 0.1672 - val_acc: 0.8539
Epoch 395/900
820/820 [==============================] - 0s 106us/step - loss: 0.1693 - acc: 0.7610 - val_loss: 0.1594 - val_acc: 0.8708
Epoch 396/900
820/820 [==============================] - 0s 108us/step - loss: 0.1695 - acc: 0.7610 - val_loss: 0.1559 - val_acc: 0.8708
Epoch 397/900
820/820 [==============================] - 0s 109us/step - loss: 0.1696 - acc: 0.7598 - val_loss: 0.1598 - val_acc: 0.8708
Epoch 398/900
820/820 [==============================] - 0s 105us/step - loss: 0.1697 - acc: 0.7634 - val_loss: 0.1664 - val_acc: 0.8539
Epoch 399/900
820/820 [==============================] - 0s 108us/step - loss: 0.1695 - acc: 0.7634 - val_loss: 0.1575 - val_acc: 0.8708
Epoch 400/900
820/820 [==============================] - 0s 108us/step - loss: 0.1698 - acc: 0.7598 - val_loss: 0.1551 - val_acc: 0.8708
Epoch 401/900
820/820 [==============================] - 0s 111us/step - loss: 0.1695 - acc: 0.7610 - val_loss: 0.1585 - val_acc: 0.8708
Epoch 402/900
820/820 [==============================] - 0s 103us/step - loss: 0.1694 - acc: 0.7610 - val_loss: 0.1654 - val_acc: 0.8596
Epoch 403/900
820/820 [==============================] - 0s 105us/step - loss: 0.1695 - acc: 0.7610 - val_loss: 0.1628 - val_acc: 0.8652
Epoch 404/900
820/820 [==============================] - 0s 128us/step - loss: 0.1700 - acc: 0.7561 - val_loss: 0.1551 - val_acc: 0.8708
Epoch 405/900
820/820 [==============================] - 0s 118us/step - loss: 0.1694 - acc: 0.7598 - val_loss: 0.1615 - val_acc: 0.8708
Epoch 406/900
820/820 [==============================] - 0s 97us/step - loss: 0.1695 - acc: 0.7610 - val_loss: 0.1535 - val_acc: 0.8708
Epoch 407/900
820/820 [==============================] - 0s 108us/step - loss: 0.1696 - acc: 0.7610 - val_loss: 0.1616 - val_acc: 0.8708
Epoch 408/900
820/820 [==============================] - 0s 103us/step - loss: 0.1694 - acc: 0.7598 - val_loss: 0.1678 - val_acc: 0.8483
Epoch 409/900
820/820 [==============================] - 0s 105us/step - loss: 0.1694 - acc: 0.7598 - val_loss: 0.1610 - val_acc: 0.8708
Epoch 410/900
820/820 [==============================] - 0s 103us/step - loss: 0.1693 - acc: 0.7610 - val_loss: 0.1595 - val_acc: 0.8708
Epoch 411/900
820/820 [==============================] - 0s 146us/step - loss: 0.1696 - acc: 0.7598 - val_loss: 0.1573 - val_acc: 0.8708
Epoch 412/900
820/820 [==============================] - 0s 105us/step - loss: 0.1701 - acc: 0.7598 - val_loss: 0.1635 - val_acc: 0.8652
Epoch 413/900
820/820 [==============================] - 0s 110us/step - loss: 0.1693 - acc: 0.7610 - val_loss: 0.1556 - val_acc: 0.8708
Epoch 414/900
820/820 [==============================] - 0s 105us/step - loss: 0.1696 - acc: 0.7598 - val_loss: 0.1608 - val_acc: 0.8708
Epoch 415/900
820/820 [==============================] - 0s 132us/step - loss: 0.1693 - acc: 0.7610 - val_loss: 0.1624 - val_acc: 0.8652
Epoch 416/900
820/820 [==============================] - 0s 362us/step - loss: 0.1693 - acc: 0.7610 - val_loss: 0.1649 - val_acc: 0.8652
Epoch 417/900
820/820 [==============================] - 0s 159us/step - loss: 0.1698 - acc: 0.7659 - val_loss: 0.1641 - val_acc: 0.8652
Epoch 418/900
820/820 [==============================] - 0s 145us/step - loss: 0.1706 - acc: 0.7585 - val_loss: 0.1470 - val_acc: 0.8708
Epoch 419/900
820/820 [==============================] - 0s 142us/step - loss: 0.1703 - acc: 0.7598 - val_loss: 0.1622 - val_acc: 0.8708
Epoch 420/900
820/820 [==============================] - 0s 108us/step - loss: 0.1694 - acc: 0.7646 - val_loss: 0.1648 - val_acc: 0.8596
Epoch 421/900
820/820 [==============================] - 0s 113us/step - loss: 0.1694 - acc: 0.7610 - val_loss: 0.1578 - val_acc: 0.8708
Epoch 422/900
820/820 [==============================] - 0s 122us/step - loss: 0.1696 - acc: 0.7610 - val_loss: 0.1695 - val_acc: 0.8483
Epoch 423/900
820/820 [==============================] - 0s 136us/step - loss: 0.1693 - acc: 0.7646 - val_loss: 0.1628 - val_acc: 0.8652
Epoch 424/900
820/820 [==============================] - 0s 141us/step - loss: 0.1692 - acc: 0.7610 - val_loss: 0.1588 - val_acc: 0.8708
Epoch 425/900
820/820 [==============================] - 0s 136us/step - loss: 0.1692 - acc: 0.7598 - val_loss: 0.1539 - val_acc: 0.8764
Epoch 426/900
820/820 [==============================] - 0s 118us/step - loss: 0.1699 - acc: 0.7622 - val_loss: 0.1511 - val_acc: 0.8764
Epoch 427/900
820/820 [==============================] - 0s 114us/step - loss: 0.1695 - acc: 0.7598 - val_loss: 0.1617 - val_acc: 0.8652
Epoch 428/900
820/820 [==============================] - 0s 122us/step - loss: 0.1691 - acc: 0.7610 - val_loss: 0.1640 - val_acc: 0.8652
Epoch 429/900
820/820 [==============================] - 0s 130us/step - loss: 0.1692 - acc: 0.7634 - val_loss: 0.1624 - val_acc: 0.8652
Epoch 430/900
820/820 [==============================] - 0s 114us/step - loss: 0.1693 - acc: 0.7598 - val_loss: 0.1507 - val_acc: 0.8764
Epoch 431/900
820/820 [==============================] - 0s 122us/step - loss: 0.1697 - acc: 0.7598 - val_loss: 0.1568 - val_acc: 0.8708
Epoch 432/900
820/820 [==============================] - 0s 128us/step - loss: 0.1691 - acc: 0.7598 - val_loss: 0.1573 - val_acc: 0.8708
Epoch 433/900
820/820 [==============================] - 0s 149us/step - loss: 0.1692 - acc: 0.7610 - val_loss: 0.1631 - val_acc: 0.8652
Epoch 434/900
820/820 [==============================] - 0s 145us/step - loss: 0.1691 - acc: 0.7622 - val_loss: 0.1613 - val_acc: 0.8652
Epoch 435/900
820/820 [==============================] - 0s 126us/step - loss: 0.1692 - acc: 0.7646 - val_loss: 0.1511 - val_acc: 0.8764
Epoch 436/900
820/820 [==============================] - 0s 122us/step - loss: 0.1693 - acc: 0.7634 - val_loss: 0.1593 - val_acc: 0.8652
Epoch 437/900
820/820 [==============================] - 0s 152us/step - loss: 0.1690 - acc: 0.7634 - val_loss: 0.1673 - val_acc: 0.8539
Epoch 438/900
820/820 [==============================] - 0s 167us/step - loss: 0.1698 - acc: 0.7610 - val_loss: 0.1661 - val_acc: 0.8596
Epoch 439/900
820/820 [==============================] - 0s 154us/step - loss: 0.1693 - acc: 0.7622 - val_loss: 0.1526 - val_acc: 0.8764
Epoch 440/900
820/820 [==============================] - 0s 137us/step - loss: 0.1693 - acc: 0.7622 - val_loss: 0.1583 - val_acc: 0.8708
Epoch 441/900
820/820 [==============================] - 0s 167us/step - loss: 0.1694 - acc: 0.7610 - val_loss: 0.1661 - val_acc: 0.8539
Epoch 442/900
820/820 [==============================] - 0s 109us/step - loss: 0.1689 - acc: 0.7622 - val_loss: 0.1596 - val_acc: 0.8708
Epoch 443/900
820/820 [==============================] - 0s 110us/step - loss: 0.1690 - acc: 0.7598 - val_loss: 0.1583 - val_acc: 0.8708
Epoch 444/900
820/820 [==============================] - 0s 110us/step - loss: 0.1694 - acc: 0.7598 - val_loss: 0.1549 - val_acc: 0.8764
Epoch 445/900
820/820 [==============================] - 0s 102us/step - loss: 0.1692 - acc: 0.7585 - val_loss: 0.1588 - val_acc: 0.8708
Epoch 446/900
820/820 [==============================] - 0s 97us/step - loss: 0.1689 - acc: 0.7622 - val_loss: 0.1662 - val_acc: 0.8539
Epoch 447/900
820/820 [==============================] - 0s 99us/step - loss: 0.1694 - acc: 0.7622 - val_loss: 0.1668 - val_acc: 0.8596
Epoch 448/900
820/820 [==============================] - 0s 97us/step - loss: 0.1688 - acc: 0.7598 - val_loss: 0.1553 - val_acc: 0.8764
Epoch 449/900
820/820 [==============================] - 0s 97us/step - loss: 0.1692 - acc: 0.7598 - val_loss: 0.1601 - val_acc: 0.8652
Epoch 450/900
820/820 [==============================] - 0s 99us/step - loss: 0.1689 - acc: 0.7622 - val_loss: 0.1532 - val_acc: 0.8764
Epoch 451/900
820/820 [==============================] - 0s 102us/step - loss: 0.1692 - acc: 0.7610 - val_loss: 0.1560 - val_acc: 0.8708
Epoch 452/900
820/820 [==============================] - 0s 104us/step - loss: 0.1690 - acc: 0.7598 - val_loss: 0.1632 - val_acc: 0.8596
Epoch 453/900
820/820 [==============================] - 0s 101us/step - loss: 0.1686 - acc: 0.7634 - val_loss: 0.1565 - val_acc: 0.8708
Epoch 454/900
820/820 [==============================] - 0s 100us/step - loss: 0.1693 - acc: 0.7610 - val_loss: 0.1545 - val_acc: 0.8764
Epoch 455/900
820/820 [==============================] - 0s 97us/step - loss: 0.1693 - acc: 0.7585 - val_loss: 0.1667 - val_acc: 0.8539
Epoch 456/900
820/820 [==============================] - 0s 99us/step - loss: 0.1688 - acc: 0.7622 - val_loss: 0.1598 - val_acc: 0.8652
Epoch 457/900
820/820 [==============================] - 0s 97us/step - loss: 0.1690 - acc: 0.7622 - val_loss: 0.1597 - val_acc: 0.8652
Epoch 458/900
820/820 [==============================] - 0s 111us/step - loss: 0.1688 - acc: 0.7622 - val_loss: 0.1545 - val_acc: 0.8764
Epoch 459/900
820/820 [==============================] - 0s 101us/step - loss: 0.1693 - acc: 0.7573 - val_loss: 0.1586 - val_acc: 0.8652
Epoch 460/900
820/820 [==============================] - 0s 99us/step - loss: 0.1690 - acc: 0.7610 - val_loss: 0.1510 - val_acc: 0.8764
Epoch 461/900
820/820 [==============================] - 0s 95us/step - loss: 0.1698 - acc: 0.7610 - val_loss: 0.1518 - val_acc: 0.8764
Epoch 462/900
820/820 [==============================] - 0s 101us/step - loss: 0.1692 - acc: 0.7598 - val_loss: 0.1567 - val_acc: 0.8764
Epoch 463/900
820/820 [==============================] - 0s 103us/step - loss: 0.1689 - acc: 0.7573 - val_loss: 0.1592 - val_acc: 0.8652
Epoch 464/900
820/820 [==============================] - 0s 98us/step - loss: 0.1689 - acc: 0.7610 - val_loss: 0.1642 - val_acc: 0.8539
Epoch 465/900
820/820 [==============================] - 0s 100us/step - loss: 0.1689 - acc: 0.7585 - val_loss: 0.1564 - val_acc: 0.8708
Epoch 466/900
820/820 [==============================] - 0s 99us/step - loss: 0.1689 - acc: 0.7610 - val_loss: 0.1623 - val_acc: 0.8539
Epoch 467/900
820/820 [==============================] - 0s 97us/step - loss: 0.1685 - acc: 0.7610 - val_loss: 0.1560 - val_acc: 0.8764
Epoch 468/900
820/820 [==============================] - 0s 101us/step - loss: 0.1688 - acc: 0.7585 - val_loss: 0.1609 - val_acc: 0.8596
Epoch 469/900
820/820 [==============================] - 0s 95us/step - loss: 0.1688 - acc: 0.7610 - val_loss: 0.1605 - val_acc: 0.8652
Epoch 470/900
820/820 [==============================] - 0s 97us/step - loss: 0.1688 - acc: 0.7610 - val_loss: 0.1655 - val_acc: 0.8483
Epoch 471/900
820/820 [==============================] - 0s 96us/step - loss: 0.1688 - acc: 0.7646 - val_loss: 0.1615 - val_acc: 0.8596
Epoch 472/900
820/820 [==============================] - 0s 95us/step - loss: 0.1687 - acc: 0.7610 - val_loss: 0.1561 - val_acc: 0.8764
Epoch 473/900
820/820 [==============================] - 0s 97us/step - loss: 0.1687 - acc: 0.7585 - val_loss: 0.1594 - val_acc: 0.8652
Epoch 474/900
820/820 [==============================] - 0s 99us/step - loss: 0.1686 - acc: 0.7646 - val_loss: 0.1657 - val_acc: 0.8483
Epoch 475/900
820/820 [==============================] - 0s 106us/step - loss: 0.1686 - acc: 0.7622 - val_loss: 0.1593 - val_acc: 0.8652
Epoch 476/900
820/820 [==============================] - 0s 105us/step - loss: 0.1687 - acc: 0.7634 - val_loss: 0.1595 - val_acc: 0.8652
Epoch 477/900
820/820 [==============================] - 0s 97us/step - loss: 0.1692 - acc: 0.7622 - val_loss: 0.1537 - val_acc: 0.8764
Epoch 478/900
820/820 [==============================] - 0s 100us/step - loss: 0.1688 - acc: 0.7610 - val_loss: 0.1600 - val_acc: 0.8596
Epoch 479/900
820/820 [==============================] - 0s 96us/step - loss: 0.1686 - acc: 0.7622 - val_loss: 0.1644 - val_acc: 0.8596
Epoch 480/900
820/820 [==============================] - 0s 100us/step - loss: 0.1687 - acc: 0.7622 - val_loss: 0.1630 - val_acc: 0.8596
Epoch 481/900
820/820 [==============================] - 0s 95us/step - loss: 0.1687 - acc: 0.7622 - val_loss: 0.1537 - val_acc: 0.8764
Epoch 482/900
820/820 [==============================] - 0s 98us/step - loss: 0.1688 - acc: 0.7598 - val_loss: 0.1577 - val_acc: 0.8596
Epoch 483/900
820/820 [==============================] - 0s 96us/step - loss: 0.1685 - acc: 0.7610 - val_loss: 0.1619 - val_acc: 0.8596
Epoch 484/900
820/820 [==============================] - 0s 98us/step - loss: 0.1686 - acc: 0.7622 - val_loss: 0.1643 - val_acc: 0.8539
Epoch 485/900
820/820 [==============================] - 0s 97us/step - loss: 0.1686 - acc: 0.7622 - val_loss: 0.1632 - val_acc: 0.8539
Epoch 486/900
820/820 [==============================] - 0s 96us/step - loss: 0.1684 - acc: 0.7622 - val_loss: 0.1562 - val_acc: 0.8708
Epoch 487/900
820/820 [==============================] - 0s 98us/step - loss: 0.1688 - acc: 0.7598 - val_loss: 0.1574 - val_acc: 0.8596
Epoch 488/900
820/820 [==============================] - 0s 102us/step - loss: 0.1685 - acc: 0.7610 - val_loss: 0.1631 - val_acc: 0.8596
Epoch 489/900
820/820 [==============================] - 0s 97us/step - loss: 0.1686 - acc: 0.7622 - val_loss: 0.1540 - val_acc: 0.8764
Epoch 490/900
820/820 [==============================] - 0s 98us/step - loss: 0.1687 - acc: 0.7610 - val_loss: 0.1584 - val_acc: 0.8596
Epoch 491/900
820/820 [==============================] - 0s 97us/step - loss: 0.1684 - acc: 0.7622 - val_loss: 0.1609 - val_acc: 0.8596
Epoch 492/900
820/820 [==============================] - 0s 100us/step - loss: 0.1684 - acc: 0.7634 - val_loss: 0.1631 - val_acc: 0.8596
Epoch 493/900
820/820 [==============================] - 0s 100us/step - loss: 0.1683 - acc: 0.7634 - val_loss: 0.1573 - val_acc: 0.8652
Epoch 494/900
820/820 [==============================] - 0s 96us/step - loss: 0.1689 - acc: 0.7622 - val_loss: 0.1517 - val_acc: 0.8764
Epoch 495/900
820/820 [==============================] - 0s 97us/step - loss: 0.1689 - acc: 0.7610 - val_loss: 0.1584 - val_acc: 0.8596
Epoch 496/900
820/820 [==============================] - 0s 98us/step - loss: 0.1684 - acc: 0.7646 - val_loss: 0.1641 - val_acc: 0.8596
Epoch 497/900
820/820 [==============================] - 0s 99us/step - loss: 0.1685 - acc: 0.7671 - val_loss: 0.1644 - val_acc: 0.8539
Epoch 498/900
820/820 [==============================] - 0s 96us/step - loss: 0.1686 - acc: 0.7646 - val_loss: 0.1636 - val_acc: 0.8539
Epoch 499/900
820/820 [==============================] - 0s 101us/step - loss: 0.1684 - acc: 0.7646 - val_loss: 0.1576 - val_acc: 0.8652
Epoch 500/900
820/820 [==============================] - 0s 105us/step - loss: 0.1685 - acc: 0.7634 - val_loss: 0.1629 - val_acc: 0.8539
Epoch 501/900
820/820 [==============================] - 0s 95us/step - loss: 0.1685 - acc: 0.7646 - val_loss: 0.1601 - val_acc: 0.8539
Epoch 502/900
820/820 [==============================] - 0s 100us/step - loss: 0.1682 - acc: 0.7622 - val_loss: 0.1568 - val_acc: 0.8652
Epoch 503/900
820/820 [==============================] - 0s 100us/step - loss: 0.1684 - acc: 0.7622 - val_loss: 0.1581 - val_acc: 0.8652
Epoch 504/900
820/820 [==============================] - 0s 98us/step - loss: 0.1683 - acc: 0.7634 - val_loss: 0.1632 - val_acc: 0.8539
Epoch 505/900
820/820 [==============================] - 0s 98us/step - loss: 0.1683 - acc: 0.7634 - val_loss: 0.1596 - val_acc: 0.8596
Epoch 506/900
820/820 [==============================] - 0s 96us/step - loss: 0.1685 - acc: 0.7585 - val_loss: 0.1546 - val_acc: 0.8764
Epoch 507/900
820/820 [==============================] - 0s 94us/step - loss: 0.1682 - acc: 0.7622 - val_loss: 0.1628 - val_acc: 0.8596
Epoch 508/900
820/820 [==============================] - 0s 97us/step - loss: 0.1684 - acc: 0.7659 - val_loss: 0.1632 - val_acc: 0.8596
Epoch 509/900
820/820 [==============================] - 0s 96us/step - loss: 0.1682 - acc: 0.7622 - val_loss: 0.1568 - val_acc: 0.8596
Epoch 510/900
820/820 [==============================] - 0s 97us/step - loss: 0.1683 - acc: 0.7610 - val_loss: 0.1623 - val_acc: 0.8596
Epoch 511/900
820/820 [==============================] - 0s 98us/step - loss: 0.1684 - acc: 0.7646 - val_loss: 0.1608 - val_acc: 0.8596
Epoch 512/900
820/820 [==============================] - 0s 100us/step - loss: 0.1685 - acc: 0.7634 - val_loss: 0.1579 - val_acc: 0.8596
Epoch 513/900
820/820 [==============================] - 0s 98us/step - loss: 0.1682 - acc: 0.7622 - val_loss: 0.1649 - val_acc: 0.8539
Epoch 514/900
820/820 [==============================] - 0s 100us/step - loss: 0.1681 - acc: 0.7646 - val_loss: 0.1583 - val_acc: 0.8596
Epoch 515/900
820/820 [==============================] - 0s 103us/step - loss: 0.1681 - acc: 0.7622 - val_loss: 0.1559 - val_acc: 0.8652
Epoch 516/900
820/820 [==============================] - 0s 101us/step - loss: 0.1684 - acc: 0.7622 - val_loss: 0.1548 - val_acc: 0.8708
Epoch 517/900
820/820 [==============================] - 0s 107us/step - loss: 0.1684 - acc: 0.7610 - val_loss: 0.1573 - val_acc: 0.8652
Epoch 518/900
820/820 [==============================] - 0s 109us/step - loss: 0.1682 - acc: 0.7598 - val_loss: 0.1629 - val_acc: 0.8539
Epoch 519/900
820/820 [==============================] - 0s 104us/step - loss: 0.1683 - acc: 0.7659 - val_loss: 0.1651 - val_acc: 0.8539
Epoch 520/900
820/820 [==============================] - 0s 97us/step - loss: 0.1681 - acc: 0.7659 - val_loss: 0.1589 - val_acc: 0.8596
Epoch 521/900
820/820 [==============================] - 0s 100us/step - loss: 0.1684 - acc: 0.7610 - val_loss: 0.1520 - val_acc: 0.8764
Epoch 522/900
820/820 [==============================] - 0s 100us/step - loss: 0.1682 - acc: 0.7622 - val_loss: 0.1603 - val_acc: 0.8596
Epoch 523/900
820/820 [==============================] - 0s 99us/step - loss: 0.1680 - acc: 0.7634 - val_loss: 0.1675 - val_acc: 0.8483
Epoch 524/900
820/820 [==============================] - 0s 224us/step - loss: 0.1683 - acc: 0.7646 - val_loss: 0.1628 - val_acc: 0.8539
Epoch 525/900
820/820 [==============================] - 0s 115us/step - loss: 0.1684 - acc: 0.7622 - val_loss: 0.1585 - val_acc: 0.8596
Epoch 526/900
820/820 [==============================] - 0s 137us/step - loss: 0.1680 - acc: 0.7622 - val_loss: 0.1575 - val_acc: 0.8652
Epoch 527/900
820/820 [==============================] - 0s 133us/step - loss: 0.1681 - acc: 0.7622 - val_loss: 0.1568 - val_acc: 0.8652
Epoch 528/900
820/820 [==============================] - 0s 105us/step - loss: 0.1681 - acc: 0.7622 - val_loss: 0.1621 - val_acc: 0.8539
Epoch 529/900
820/820 [==============================] - 0s 96us/step - loss: 0.1681 - acc: 0.7646 - val_loss: 0.1591 - val_acc: 0.8596
Epoch 530/900
820/820 [==============================] - 0s 99us/step - loss: 0.1681 - acc: 0.7659 - val_loss: 0.1614 - val_acc: 0.8596
Epoch 531/900
820/820 [==============================] - 0s 98us/step - loss: 0.1681 - acc: 0.7659 - val_loss: 0.1615 - val_acc: 0.8539
Epoch 532/900
820/820 [==============================] - 0s 97us/step - loss: 0.1679 - acc: 0.7659 - val_loss: 0.1610 - val_acc: 0.8596
Epoch 533/900
820/820 [==============================] - 0s 99us/step - loss: 0.1680 - acc: 0.7622 - val_loss: 0.1604 - val_acc: 0.8596
Epoch 534/900
820/820 [==============================] - 0s 106us/step - loss: 0.1679 - acc: 0.7659 - val_loss: 0.1614 - val_acc: 0.8539
Epoch 535/900
820/820 [==============================] - 0s 105us/step - loss: 0.1680 - acc: 0.7622 - val_loss: 0.1577 - val_acc: 0.8652
Epoch 536/900
820/820 [==============================] - 0s 105us/step - loss: 0.1679 - acc: 0.7634 - val_loss: 0.1626 - val_acc: 0.8539
Epoch 537/900
820/820 [==============================] - 0s 104us/step - loss: 0.1680 - acc: 0.7646 - val_loss: 0.1601 - val_acc: 0.8596
Epoch 538/900
820/820 [==============================] - 0s 99us/step - loss: 0.1689 - acc: 0.7646 - val_loss: 0.1505 - val_acc: 0.8764
Epoch 539/900
820/820 [==============================] - 0s 96us/step - loss: 0.1686 - acc: 0.7671 - val_loss: 0.1666 - val_acc: 0.8483
Epoch 540/900
820/820 [==============================] - 0s 96us/step - loss: 0.1681 - acc: 0.7659 - val_loss: 0.1612 - val_acc: 0.8596
Epoch 541/900
820/820 [==============================] - 0s 95us/step - loss: 0.1682 - acc: 0.7634 - val_loss: 0.1524 - val_acc: 0.8708
Epoch 542/900
820/820 [==============================] - 0s 98us/step - loss: 0.1690 - acc: 0.7585 - val_loss: 0.1627 - val_acc: 0.8596
Epoch 543/900
820/820 [==============================] - 0s 103us/step - loss: 0.1682 - acc: 0.7622 - val_loss: 0.1512 - val_acc: 0.8708
Epoch 544/900
820/820 [==============================] - 0s 100us/step - loss: 0.1684 - acc: 0.7646 - val_loss: 0.1623 - val_acc: 0.8539
Epoch 545/900
820/820 [==============================] - 0s 100us/step - loss: 0.1683 - acc: 0.7634 - val_loss: 0.1601 - val_acc: 0.8596
Epoch 546/900
820/820 [==============================] - 0s 112us/step - loss: 0.1679 - acc: 0.7610 - val_loss: 0.1625 - val_acc: 0.8539
Epoch 547/900
820/820 [==============================] - 0s 106us/step - loss: 0.1682 - acc: 0.7610 - val_loss: 0.1635 - val_acc: 0.8539
Epoch 548/900
820/820 [==============================] - 0s 104us/step - loss: 0.1682 - acc: 0.7659 - val_loss: 0.1552 - val_acc: 0.8708
Epoch 549/900
820/820 [==============================] - 0s 103us/step - loss: 0.1680 - acc: 0.7634 - val_loss: 0.1624 - val_acc: 0.8539
Epoch 550/900
820/820 [==============================] - 0s 110us/step - loss: 0.1679 - acc: 0.7646 - val_loss: 0.1600 - val_acc: 0.8596
Epoch 551/900
820/820 [==============================] - 0s 111us/step - loss: 0.1677 - acc: 0.7634 - val_loss: 0.1551 - val_acc: 0.8708
Epoch 552/900
820/820 [==============================] - 0s 104us/step - loss: 0.1680 - acc: 0.7634 - val_loss: 0.1565 - val_acc: 0.8652
Epoch 553/900
820/820 [==============================] - 0s 108us/step - loss: 0.1676 - acc: 0.7659 - val_loss: 0.1662 - val_acc: 0.8539
Epoch 554/900
820/820 [==============================] - 0s 146us/step - loss: 0.1679 - acc: 0.7646 - val_loss: 0.1631 - val_acc: 0.8539
Epoch 555/900
820/820 [==============================] - 0s 114us/step - loss: 0.1680 - acc: 0.7646 - val_loss: 0.1620 - val_acc: 0.8539
Epoch 556/900
820/820 [==============================] - 0s 109us/step - loss: 0.1680 - acc: 0.7646 - val_loss: 0.1564 - val_acc: 0.8708
Epoch 557/900
820/820 [==============================] - 0s 114us/step - loss: 0.1677 - acc: 0.7610 - val_loss: 0.1642 - val_acc: 0.8539
Epoch 558/900
820/820 [==============================] - 0s 98us/step - loss: 0.1680 - acc: 0.7646 - val_loss: 0.1601 - val_acc: 0.8539
Epoch 559/900
820/820 [==============================] - 0s 110us/step - loss: 0.1676 - acc: 0.7659 - val_loss: 0.1655 - val_acc: 0.8539
Epoch 560/900
820/820 [==============================] - 0s 106us/step - loss: 0.1678 - acc: 0.7671 - val_loss: 0.1630 - val_acc: 0.8539
Epoch 561/900
820/820 [==============================] - 0s 103us/step - loss: 0.1677 - acc: 0.7646 - val_loss: 0.1592 - val_acc: 0.8596
Epoch 562/900
820/820 [==============================] - 0s 121us/step - loss: 0.1677 - acc: 0.7622 - val_loss: 0.1568 - val_acc: 0.8596
Epoch 563/900
820/820 [==============================] - 0s 99us/step - loss: 0.1676 - acc: 0.7634 - val_loss: 0.1605 - val_acc: 0.8539
Epoch 564/900
820/820 [==============================] - 0s 98us/step - loss: 0.1679 - acc: 0.7659 - val_loss: 0.1627 - val_acc: 0.8539
Epoch 565/900
820/820 [==============================] - 0s 106us/step - loss: 0.1678 - acc: 0.7634 - val_loss: 0.1526 - val_acc: 0.8708
Epoch 566/900
820/820 [==============================] - 0s 110us/step - loss: 0.1677 - acc: 0.7634 - val_loss: 0.1586 - val_acc: 0.8539
Epoch 567/900
820/820 [==============================] - 0s 96us/step - loss: 0.1681 - acc: 0.7622 - val_loss: 0.1707 - val_acc: 0.8427
Epoch 568/900
820/820 [==============================] - 0s 107us/step - loss: 0.1678 - acc: 0.7634 - val_loss: 0.1568 - val_acc: 0.8652
Epoch 569/900
820/820 [==============================] - 0s 110us/step - loss: 0.1677 - acc: 0.7622 - val_loss: 0.1542 - val_acc: 0.8708
Epoch 570/900
820/820 [==============================] - 0s 114us/step - loss: 0.1678 - acc: 0.7634 - val_loss: 0.1567 - val_acc: 0.8708
Epoch 571/900
820/820 [==============================] - 0s 111us/step - loss: 0.1675 - acc: 0.7659 - val_loss: 0.1626 - val_acc: 0.8539
Epoch 572/900
820/820 [==============================] - 0s 110us/step - loss: 0.1676 - acc: 0.7659 - val_loss: 0.1587 - val_acc: 0.8596
Epoch 573/900
820/820 [==============================] - 0s 102us/step - loss: 0.1678 - acc: 0.7659 - val_loss: 0.1632 - val_acc: 0.8539
Epoch 574/900
820/820 [==============================] - 0s 101us/step - loss: 0.1674 - acc: 0.7646 - val_loss: 0.1594 - val_acc: 0.8539
Epoch 575/900
820/820 [==============================] - 0s 109us/step - loss: 0.1676 - acc: 0.7646 - val_loss: 0.1633 - val_acc: 0.8539
Epoch 576/900
820/820 [==============================] - 0s 108us/step - loss: 0.1676 - acc: 0.7659 - val_loss: 0.1588 - val_acc: 0.8539
Epoch 577/900
820/820 [==============================] - 0s 100us/step - loss: 0.1680 - acc: 0.7634 - val_loss: 0.1555 - val_acc: 0.8708
Epoch 578/900
820/820 [==============================] - 0s 100us/step - loss: 0.1676 - acc: 0.7646 - val_loss: 0.1666 - val_acc: 0.8596
Epoch 579/900
820/820 [==============================] - 0s 98us/step - loss: 0.1682 - acc: 0.7646 - val_loss: 0.1672 - val_acc: 0.8596
Epoch 580/900
820/820 [==============================] - 0s 114us/step - loss: 0.1674 - acc: 0.7671 - val_loss: 0.1577 - val_acc: 0.8596
Epoch 581/900
820/820 [==============================] - 0s 100us/step - loss: 0.1678 - acc: 0.7634 - val_loss: 0.1517 - val_acc: 0.8708
Epoch 582/900
820/820 [==============================] - 0s 121us/step - loss: 0.1679 - acc: 0.7610 - val_loss: 0.1589 - val_acc: 0.8596
Epoch 583/900
820/820 [==============================] - 0s 114us/step - loss: 0.1675 - acc: 0.7671 - val_loss: 0.1607 - val_acc: 0.8539
Epoch 584/900
820/820 [==============================] - 0s 134us/step - loss: 0.1679 - acc: 0.7634 - val_loss: 0.1663 - val_acc: 0.8539
Epoch 585/900
820/820 [==============================] - 0s 105us/step - loss: 0.1677 - acc: 0.7646 - val_loss: 0.1540 - val_acc: 0.8708
Epoch 586/900
820/820 [==============================] - 0s 111us/step - loss: 0.1678 - acc: 0.7634 - val_loss: 0.1677 - val_acc: 0.8483
Epoch 587/900
820/820 [==============================] - 0s 96us/step - loss: 0.1676 - acc: 0.7646 - val_loss: 0.1581 - val_acc: 0.8652
Epoch 588/900
820/820 [==============================] - 0s 105us/step - loss: 0.1674 - acc: 0.7659 - val_loss: 0.1569 - val_acc: 0.8708
Epoch 589/900
820/820 [==============================] - 0s 105us/step - loss: 0.1674 - acc: 0.7634 - val_loss: 0.1593 - val_acc: 0.8539
Epoch 590/900
820/820 [==============================] - 0s 113us/step - loss: 0.1674 - acc: 0.7646 - val_loss: 0.1597 - val_acc: 0.8539
Epoch 591/900
820/820 [==============================] - 0s 112us/step - loss: 0.1675 - acc: 0.7659 - val_loss: 0.1653 - val_acc: 0.8539
Epoch 592/900
820/820 [==============================] - 0s 108us/step - loss: 0.1677 - acc: 0.7598 - val_loss: 0.1521 - val_acc: 0.8708
Epoch 593/900
820/820 [==============================] - 0s 125us/step - loss: 0.1675 - acc: 0.7659 - val_loss: 0.1612 - val_acc: 0.8539
Epoch 594/900
820/820 [==============================] - 0s 100us/step - loss: 0.1673 - acc: 0.7659 - val_loss: 0.1622 - val_acc: 0.8539
Epoch 595/900
820/820 [==============================] - 0s 98us/step - loss: 0.1673 - acc: 0.7659 - val_loss: 0.1517 - val_acc: 0.8708
Epoch 596/900
820/820 [==============================] - 0s 98us/step - loss: 0.1676 - acc: 0.7634 - val_loss: 0.1555 - val_acc: 0.8708
Epoch 597/900
820/820 [==============================] - 0s 96us/step - loss: 0.1674 - acc: 0.7646 - val_loss: 0.1644 - val_acc: 0.8539
Epoch 598/900
820/820 [==============================] - 0s 96us/step - loss: 0.1674 - acc: 0.7659 - val_loss: 0.1588 - val_acc: 0.8539
Epoch 599/900
820/820 [==============================] - 0s 99us/step - loss: 0.1672 - acc: 0.7646 - val_loss: 0.1572 - val_acc: 0.8708
Epoch 600/900
820/820 [==============================] - 0s 97us/step - loss: 0.1672 - acc: 0.7659 - val_loss: 0.1585 - val_acc: 0.8539
Epoch 601/900
820/820 [==============================] - 0s 148us/step - loss: 0.1676 - acc: 0.7622 - val_loss: 0.1641 - val_acc: 0.8596
Epoch 602/900
820/820 [==============================] - 0s 102us/step - loss: 0.1674 - acc: 0.7671 - val_loss: 0.1536 - val_acc: 0.8708
Epoch 603/900
820/820 [==============================] - 0s 96us/step - loss: 0.1673 - acc: 0.7622 - val_loss: 0.1615 - val_acc: 0.8539
Epoch 604/900
820/820 [==============================] - 0s 97us/step - loss: 0.1672 - acc: 0.7646 - val_loss: 0.1611 - val_acc: 0.8539
Epoch 605/900
820/820 [==============================] - 0s 107us/step - loss: 0.1675 - acc: 0.7610 - val_loss: 0.1546 - val_acc: 0.8708
Epoch 606/900
820/820 [==============================] - 0s 97us/step - loss: 0.1676 - acc: 0.7610 - val_loss: 0.1648 - val_acc: 0.8539
Epoch 607/900
820/820 [==============================] - 0s 97us/step - loss: 0.1673 - acc: 0.7659 - val_loss: 0.1557 - val_acc: 0.8708
Epoch 608/900
820/820 [==============================] - 0s 105us/step - loss: 0.1674 - acc: 0.7622 - val_loss: 0.1598 - val_acc: 0.8539
Epoch 609/900
820/820 [==============================] - 0s 119us/step - loss: 0.1672 - acc: 0.7634 - val_loss: 0.1540 - val_acc: 0.8652
Epoch 610/900
820/820 [==============================] - 0s 100us/step - loss: 0.1670 - acc: 0.7646 - val_loss: 0.1643 - val_acc: 0.8596
Epoch 611/900
820/820 [==============================] - 0s 102us/step - loss: 0.1672 - acc: 0.7646 - val_loss: 0.1593 - val_acc: 0.8596
Epoch 612/900
820/820 [==============================] - 0s 182us/step - loss: 0.1672 - acc: 0.7634 - val_loss: 0.1496 - val_acc: 0.8708
Epoch 613/900
820/820 [==============================] - 0s 187us/step - loss: 0.1675 - acc: 0.7622 - val_loss: 0.1586 - val_acc: 0.8596
Epoch 614/900
820/820 [==============================] - 0s 155us/step - loss: 0.1670 - acc: 0.7671 - val_loss: 0.1605 - val_acc: 0.8539
Epoch 615/900
820/820 [==============================] - 0s 154us/step - loss: 0.1671 - acc: 0.7646 - val_loss: 0.1577 - val_acc: 0.8652
Epoch 616/900
820/820 [==============================] - 0s 152us/step - loss: 0.1670 - acc: 0.7659 - val_loss: 0.1556 - val_acc: 0.8708
Epoch 617/900
820/820 [==============================] - 0s 165us/step - loss: 0.1670 - acc: 0.7671 - val_loss: 0.1572 - val_acc: 0.8652
Epoch 618/900
820/820 [==============================] - 0s 269us/step - loss: 0.1673 - acc: 0.7646 - val_loss: 0.1620 - val_acc: 0.8539
Epoch 619/900
820/820 [==============================] - 0s 118us/step - loss: 0.1669 - acc: 0.7659 - val_loss: 0.1610 - val_acc: 0.8539
Epoch 620/900
820/820 [==============================] - 0s 100us/step - loss: 0.1670 - acc: 0.7646 - val_loss: 0.1576 - val_acc: 0.8652
Epoch 621/900
820/820 [==============================] - 0s 98us/step - loss: 0.1669 - acc: 0.7634 - val_loss: 0.1611 - val_acc: 0.8539
Epoch 622/900
820/820 [==============================] - 0s 97us/step - loss: 0.1669 - acc: 0.7646 - val_loss: 0.1604 - val_acc: 0.8539
Epoch 623/900
820/820 [==============================] - 0s 110us/step - loss: 0.1677 - acc: 0.7622 - val_loss: 0.1512 - val_acc: 0.8708
Epoch 624/900
820/820 [==============================] - 0s 110us/step - loss: 0.1671 - acc: 0.7634 - val_loss: 0.1600 - val_acc: 0.8596
Epoch 625/900
820/820 [==============================] - 0s 189us/step - loss: 0.1669 - acc: 0.7634 - val_loss: 0.1622 - val_acc: 0.8596
Epoch 626/900
820/820 [==============================] - 0s 145us/step - loss: 0.1676 - acc: 0.7659 - val_loss: 0.1506 - val_acc: 0.8708
Epoch 627/900
820/820 [==============================] - 0s 133us/step - loss: 0.1671 - acc: 0.7671 - val_loss: 0.1634 - val_acc: 0.8539
Epoch 628/900
820/820 [==============================] - 0s 248us/step - loss: 0.1674 - acc: 0.7622 - val_loss: 0.1672 - val_acc: 0.8539
Epoch 629/900
820/820 [==============================] - 0s 154us/step - loss: 0.1670 - acc: 0.7671 - val_loss: 0.1575 - val_acc: 0.8652
Epoch 630/900
820/820 [==============================] - 0s 121us/step - loss: 0.1669 - acc: 0.7634 - val_loss: 0.1575 - val_acc: 0.8652
Epoch 631/900
820/820 [==============================] - 0s 105us/step - loss: 0.1670 - acc: 0.7646 - val_loss: 0.1636 - val_acc: 0.8539
Epoch 632/900
820/820 [==============================] - 0s 102us/step - loss: 0.1671 - acc: 0.7634 - val_loss: 0.1664 - val_acc: 0.8539
Epoch 633/900
820/820 [==============================] - 0s 100us/step - loss: 0.1668 - acc: 0.7659 - val_loss: 0.1586 - val_acc: 0.8596
Epoch 634/900
820/820 [==============================] - 0s 103us/step - loss: 0.1672 - acc: 0.7659 - val_loss: 0.1502 - val_acc: 0.8708
Epoch 635/900
820/820 [==============================] - 0s 108us/step - loss: 0.1671 - acc: 0.7634 - val_loss: 0.1605 - val_acc: 0.8539
Epoch 636/900
820/820 [==============================] - 0s 102us/step - loss: 0.1676 - acc: 0.7659 - val_loss: 0.1632 - val_acc: 0.8539
Epoch 637/900
820/820 [==============================] - 0s 100us/step - loss: 0.1669 - acc: 0.7659 - val_loss: 0.1486 - val_acc: 0.8708
Epoch 638/900
820/820 [==============================] - 0s 99us/step - loss: 0.1670 - acc: 0.7659 - val_loss: 0.1627 - val_acc: 0.8596
Epoch 639/900
820/820 [==============================] - 0s 99us/step - loss: 0.1671 - acc: 0.7634 - val_loss: 0.1675 - val_acc: 0.8539
Epoch 640/900
820/820 [==============================] - 0s 103us/step - loss: 0.1668 - acc: 0.7659 - val_loss: 0.1575 - val_acc: 0.8652
Epoch 641/900
820/820 [==============================] - 0s 104us/step - loss: 0.1673 - acc: 0.7659 - val_loss: 0.1546 - val_acc: 0.8708
Epoch 642/900
820/820 [==============================] - 0s 108us/step - loss: 0.1664 - acc: 0.7659 - val_loss: 0.1647 - val_acc: 0.8596
Epoch 643/900
820/820 [==============================] - 0s 114us/step - loss: 0.1668 - acc: 0.7659 - val_loss: 0.1592 - val_acc: 0.8596
Epoch 644/900
820/820 [==============================] - 0s 96us/step - loss: 0.1667 - acc: 0.7634 - val_loss: 0.1591 - val_acc: 0.8596
Epoch 645/900
820/820 [==============================] - 0s 103us/step - loss: 0.1667 - acc: 0.7646 - val_loss: 0.1556 - val_acc: 0.8652
Epoch 646/900
820/820 [==============================] - 0s 100us/step - loss: 0.1665 - acc: 0.7646 - val_loss: 0.1602 - val_acc: 0.8539
Epoch 647/900
820/820 [==============================] - 0s 103us/step - loss: 0.1666 - acc: 0.7634 - val_loss: 0.1618 - val_acc: 0.8539
Epoch 648/900
820/820 [==============================] - 0s 110us/step - loss: 0.1664 - acc: 0.7671 - val_loss: 0.1575 - val_acc: 0.8652
Epoch 649/900
820/820 [==============================] - 0s 103us/step - loss: 0.1666 - acc: 0.7659 - val_loss: 0.1606 - val_acc: 0.8539
Epoch 650/900
820/820 [==============================] - 0s 101us/step - loss: 0.1666 - acc: 0.7634 - val_loss: 0.1540 - val_acc: 0.8708
Epoch 651/900
820/820 [==============================] - 0s 103us/step - loss: 0.1667 - acc: 0.7634 - val_loss: 0.1561 - val_acc: 0.8652
Epoch 652/900
820/820 [==============================] - 0s 99us/step - loss: 0.1665 - acc: 0.7646 - val_loss: 0.1605 - val_acc: 0.8596
Epoch 653/900
820/820 [==============================] - 0s 100us/step - loss: 0.1664 - acc: 0.7671 - val_loss: 0.1569 - val_acc: 0.8652
Epoch 654/900
820/820 [==============================] - 0s 98us/step - loss: 0.1665 - acc: 0.7646 - val_loss: 0.1598 - val_acc: 0.8652
Epoch 655/900
820/820 [==============================] - 0s 98us/step - loss: 0.1681 - acc: 0.7610 - val_loss: 0.1678 - val_acc: 0.8483
Epoch 656/900
820/820 [==============================] - 0s 96us/step - loss: 0.1678 - acc: 0.7610 - val_loss: 0.1490 - val_acc: 0.8708
Epoch 657/900
820/820 [==============================] - 0s 106us/step - loss: 0.1666 - acc: 0.7622 - val_loss: 0.1599 - val_acc: 0.8652
Epoch 658/900
820/820 [==============================] - 0s 104us/step - loss: 0.1664 - acc: 0.7671 - val_loss: 0.1581 - val_acc: 0.8652
Epoch 659/900
820/820 [==============================] - 0s 95us/step - loss: 0.1668 - acc: 0.7622 - val_loss: 0.1551 - val_acc: 0.8652
Epoch 660/900
820/820 [==============================] - 0s 99us/step - loss: 0.1663 - acc: 0.7683 - val_loss: 0.1593 - val_acc: 0.8652
Epoch 661/900
820/820 [==============================] - 0s 99us/step - loss: 0.1664 - acc: 0.7634 - val_loss: 0.1624 - val_acc: 0.8596
Epoch 662/900
820/820 [==============================] - 0s 105us/step - loss: 0.1666 - acc: 0.7671 - val_loss: 0.1527 - val_acc: 0.8708
Epoch 663/900
820/820 [==============================] - 0s 98us/step - loss: 0.1666 - acc: 0.7695 - val_loss: 0.1578 - val_acc: 0.8596
Epoch 664/900
820/820 [==============================] - 0s 116us/step - loss: 0.1664 - acc: 0.7659 - val_loss: 0.1566 - val_acc: 0.8652
Epoch 665/900
820/820 [==============================] - 0s 98us/step - loss: 0.1668 - acc: 0.7646 - val_loss: 0.1565 - val_acc: 0.8652
Epoch 666/900
820/820 [==============================] - 0s 126us/step - loss: 0.1667 - acc: 0.7646 - val_loss: 0.1660 - val_acc: 0.8596
Epoch 667/900
820/820 [==============================] - 0s 95us/step - loss: 0.1663 - acc: 0.7634 - val_loss: 0.1536 - val_acc: 0.8652
Epoch 668/900
820/820 [==============================] - 0s 114us/step - loss: 0.1665 - acc: 0.7610 - val_loss: 0.1579 - val_acc: 0.8652
Epoch 669/900
820/820 [==============================] - 0s 100us/step - loss: 0.1663 - acc: 0.7659 - val_loss: 0.1585 - val_acc: 0.8652
Epoch 670/900
820/820 [==============================] - 0s 104us/step - loss: 0.1667 - acc: 0.7646 - val_loss: 0.1532 - val_acc: 0.8652
Epoch 671/900
820/820 [==============================] - 0s 98us/step - loss: 0.1664 - acc: 0.7646 - val_loss: 0.1646 - val_acc: 0.8596
Epoch 672/900
820/820 [==============================] - 0s 107us/step - loss: 0.1662 - acc: 0.7659 - val_loss: 0.1577 - val_acc: 0.8652
Epoch 673/900
820/820 [==============================] - 0s 105us/step - loss: 0.1667 - acc: 0.7671 - val_loss: 0.1493 - val_acc: 0.8708
Epoch 674/900
820/820 [==============================] - 0s 106us/step - loss: 0.1662 - acc: 0.7646 - val_loss: 0.1648 - val_acc: 0.8596
Epoch 675/900
820/820 [==============================] - 0s 101us/step - loss: 0.1664 - acc: 0.7671 - val_loss: 0.1630 - val_acc: 0.8596
Epoch 676/900
820/820 [==============================] - 0s 102us/step - loss: 0.1662 - acc: 0.7671 - val_loss: 0.1543 - val_acc: 0.8652
Epoch 677/900
820/820 [==============================] - 0s 111us/step - loss: 0.1661 - acc: 0.7671 - val_loss: 0.1602 - val_acc: 0.8596
Epoch 678/900
820/820 [==============================] - 0s 106us/step - loss: 0.1661 - acc: 0.7659 - val_loss: 0.1658 - val_acc: 0.8596
Epoch 679/900
820/820 [==============================] - 0s 97us/step - loss: 0.1660 - acc: 0.7671 - val_loss: 0.1558 - val_acc: 0.8708
Epoch 680/900
820/820 [==============================] - 0s 106us/step - loss: 0.1664 - acc: 0.7695 - val_loss: 0.1553 - val_acc: 0.8708
Epoch 681/900
820/820 [==============================] - 0s 110us/step - loss: 0.1668 - acc: 0.7671 - val_loss: 0.1514 - val_acc: 0.8708
Epoch 682/900
820/820 [==============================] - 0s 101us/step - loss: 0.1660 - acc: 0.7659 - val_loss: 0.1650 - val_acc: 0.8596
Epoch 683/900
820/820 [==============================] - 0s 99us/step - loss: 0.1669 - acc: 0.7683 - val_loss: 0.1640 - val_acc: 0.8596
Epoch 684/900
820/820 [==============================] - 0s 106us/step - loss: 0.1663 - acc: 0.7683 - val_loss: 0.1514 - val_acc: 0.8652
Epoch 685/900
820/820 [==============================] - 0s 103us/step - loss: 0.1662 - acc: 0.7683 - val_loss: 0.1584 - val_acc: 0.8708
Epoch 686/900
820/820 [==============================] - 0s 114us/step - loss: 0.1659 - acc: 0.7671 - val_loss: 0.1679 - val_acc: 0.8539
Epoch 687/900
820/820 [==============================] - 0s 108us/step - loss: 0.1662 - acc: 0.7695 - val_loss: 0.1631 - val_acc: 0.8596
Epoch 688/900
820/820 [==============================] - 0s 107us/step - loss: 0.1660 - acc: 0.7695 - val_loss: 0.1547 - val_acc: 0.8652
Epoch 689/900
820/820 [==============================] - 0s 113us/step - loss: 0.1661 - acc: 0.7634 - val_loss: 0.1589 - val_acc: 0.8652
Epoch 690/900
820/820 [==============================] - 0s 100us/step - loss: 0.1663 - acc: 0.7659 - val_loss: 0.1557 - val_acc: 0.8652
Epoch 691/900
820/820 [==============================] - 0s 108us/step - loss: 0.1660 - acc: 0.7634 - val_loss: 0.1713 - val_acc: 0.8427
Epoch 692/900
820/820 [==============================] - 0s 100us/step - loss: 0.1665 - acc: 0.7622 - val_loss: 0.1607 - val_acc: 0.8596
Epoch 693/900
820/820 [==============================] - 0s 109us/step - loss: 0.1661 - acc: 0.7659 - val_loss: 0.1565 - val_acc: 0.8708
Epoch 694/900
820/820 [==============================] - 0s 100us/step - loss: 0.1664 - acc: 0.7659 - val_loss: 0.1684 - val_acc: 0.8483
Epoch 695/900
820/820 [==============================] - 0s 101us/step - loss: 0.1661 - acc: 0.7695 - val_loss: 0.1572 - val_acc: 0.8708
Epoch 696/900
820/820 [==============================] - 0s 103us/step - loss: 0.1659 - acc: 0.7671 - val_loss: 0.1655 - val_acc: 0.8596
Epoch 697/900
820/820 [==============================] - 0s 102us/step - loss: 0.1661 - acc: 0.7695 - val_loss: 0.1571 - val_acc: 0.8652
Epoch 698/900
820/820 [==============================] - 0s 103us/step - loss: 0.1658 - acc: 0.7646 - val_loss: 0.1658 - val_acc: 0.8596
Epoch 699/900
820/820 [==============================] - 0s 101us/step - loss: 0.1658 - acc: 0.7646 - val_loss: 0.1565 - val_acc: 0.8652
Epoch 700/900
820/820 [==============================] - 0s 101us/step - loss: 0.1659 - acc: 0.7671 - val_loss: 0.1612 - val_acc: 0.8596
Epoch 701/900
820/820 [==============================] - 0s 103us/step - loss: 0.1658 - acc: 0.7671 - val_loss: 0.1659 - val_acc: 0.8596
Epoch 702/900
820/820 [==============================] - 0s 97us/step - loss: 0.1662 - acc: 0.7695 - val_loss: 0.1701 - val_acc: 0.8539
Epoch 703/900
820/820 [==============================] - 0s 104us/step - loss: 0.1659 - acc: 0.7659 - val_loss: 0.1581 - val_acc: 0.8708
Epoch 704/900
820/820 [==============================] - 0s 97us/step - loss: 0.1659 - acc: 0.7683 - val_loss: 0.1510 - val_acc: 0.8708
Epoch 705/900
820/820 [==============================] - 0s 95us/step - loss: 0.1661 - acc: 0.7671 - val_loss: 0.1642 - val_acc: 0.8596
Epoch 706/900
820/820 [==============================] - 0s 97us/step - loss: 0.1657 - acc: 0.7659 - val_loss: 0.1600 - val_acc: 0.8652
Epoch 707/900
820/820 [==============================] - 0s 97us/step - loss: 0.1656 - acc: 0.7659 - val_loss: 0.1573 - val_acc: 0.8708
Epoch 708/900
820/820 [==============================] - 0s 98us/step - loss: 0.1656 - acc: 0.7671 - val_loss: 0.1591 - val_acc: 0.8652
Epoch 709/900
820/820 [==============================] - 0s 99us/step - loss: 0.1670 - acc: 0.7707 - val_loss: 0.1478 - val_acc: 0.8708
Epoch 710/900
820/820 [==============================] - 0s 94us/step - loss: 0.1659 - acc: 0.7695 - val_loss: 0.1611 - val_acc: 0.8596
Epoch 711/900
820/820 [==============================] - 0s 101us/step - loss: 0.1657 - acc: 0.7659 - val_loss: 0.1691 - val_acc: 0.8483
Epoch 712/900
820/820 [==============================] - 0s 107us/step - loss: 0.1659 - acc: 0.7634 - val_loss: 0.1617 - val_acc: 0.8596
Epoch 713/900
820/820 [==============================] - 0s 104us/step - loss: 0.1662 - acc: 0.7659 - val_loss: 0.1640 - val_acc: 0.8596
Epoch 714/900
820/820 [==============================] - 0s 98us/step - loss: 0.1656 - acc: 0.7622 - val_loss: 0.1577 - val_acc: 0.8708
Epoch 715/900
820/820 [==============================] - 0s 107us/step - loss: 0.1655 - acc: 0.7683 - val_loss: 0.1561 - val_acc: 0.8708
Epoch 716/900
820/820 [==============================] - 0s 96us/step - loss: 0.1656 - acc: 0.7659 - val_loss: 0.1596 - val_acc: 0.8652
Epoch 717/900
820/820 [==============================] - 0s 96us/step - loss: 0.1663 - acc: 0.7671 - val_loss: 0.1526 - val_acc: 0.8708
Epoch 718/900
820/820 [==============================] - 0s 102us/step - loss: 0.1658 - acc: 0.7671 - val_loss: 0.1637 - val_acc: 0.8596
Epoch 719/900
820/820 [==============================] - 0s 100us/step - loss: 0.1656 - acc: 0.7622 - val_loss: 0.1582 - val_acc: 0.8652
Epoch 720/900
820/820 [==============================] - 0s 110us/step - loss: 0.1659 - acc: 0.7671 - val_loss: 0.1532 - val_acc: 0.8708
Epoch 721/900
820/820 [==============================] - 0s 111us/step - loss: 0.1656 - acc: 0.7659 - val_loss: 0.1621 - val_acc: 0.8596
Epoch 722/900
820/820 [==============================] - 0s 106us/step - loss: 0.1656 - acc: 0.7659 - val_loss: 0.1611 - val_acc: 0.8596
Epoch 723/900
820/820 [==============================] - 0s 102us/step - loss: 0.1655 - acc: 0.7634 - val_loss: 0.1620 - val_acc: 0.8596
Epoch 724/900
820/820 [==============================] - 0s 104us/step - loss: 0.1654 - acc: 0.7659 - val_loss: 0.1575 - val_acc: 0.8708
Epoch 725/900
820/820 [==============================] - 0s 99us/step - loss: 0.1654 - acc: 0.7695 - val_loss: 0.1586 - val_acc: 0.8708
Epoch 726/900
820/820 [==============================] - 0s 99us/step - loss: 0.1657 - acc: 0.7695 - val_loss: 0.1678 - val_acc: 0.8539
Epoch 727/900
820/820 [==============================] - 0s 111us/step - loss: 0.1655 - acc: 0.7671 - val_loss: 0.1544 - val_acc: 0.8708
Epoch 728/900
820/820 [==============================] - 0s 96us/step - loss: 0.1656 - acc: 0.7720 - val_loss: 0.1593 - val_acc: 0.8708
Epoch 729/900
820/820 [==============================] - 0s 98us/step - loss: 0.1659 - acc: 0.7659 - val_loss: 0.1660 - val_acc: 0.8596
Epoch 730/900
820/820 [==============================] - 0s 107us/step - loss: 0.1662 - acc: 0.7659 - val_loss: 0.1536 - val_acc: 0.8708
Epoch 731/900
820/820 [==============================] - 0s 113us/step - loss: 0.1652 - acc: 0.7659 - val_loss: 0.1642 - val_acc: 0.8596
Epoch 732/900
820/820 [==============================] - 0s 105us/step - loss: 0.1653 - acc: 0.7634 - val_loss: 0.1601 - val_acc: 0.8596
Epoch 733/900
820/820 [==============================] - 0s 112us/step - loss: 0.1652 - acc: 0.7683 - val_loss: 0.1586 - val_acc: 0.8708
Epoch 734/900
820/820 [==============================] - 0s 101us/step - loss: 0.1655 - acc: 0.7683 - val_loss: 0.1591 - val_acc: 0.8708
Epoch 735/900
820/820 [==============================] - 0s 105us/step - loss: 0.1650 - acc: 0.7671 - val_loss: 0.1671 - val_acc: 0.8539
Epoch 736/900
820/820 [==============================] - 0s 102us/step - loss: 0.1659 - acc: 0.7659 - val_loss: 0.1643 - val_acc: 0.8596
Epoch 737/900
820/820 [==============================] - 0s 102us/step - loss: 0.1654 - acc: 0.7671 - val_loss: 0.1557 - val_acc: 0.8708
Epoch 738/900
820/820 [==============================] - 0s 103us/step - loss: 0.1652 - acc: 0.7683 - val_loss: 0.1654 - val_acc: 0.8596
Epoch 739/900
820/820 [==============================] - 0s 96us/step - loss: 0.1656 - acc: 0.7671 - val_loss: 0.1591 - val_acc: 0.8652
Epoch 740/900
820/820 [==============================] - 0s 97us/step - loss: 0.1659 - acc: 0.7683 - val_loss: 0.1656 - val_acc: 0.8596
Epoch 741/900
820/820 [==============================] - 0s 95us/step - loss: 0.1657 - acc: 0.7659 - val_loss: 0.1515 - val_acc: 0.8708
Epoch 742/900
820/820 [==============================] - 0s 98us/step - loss: 0.1657 - acc: 0.7634 - val_loss: 0.1655 - val_acc: 0.8483
Epoch 743/900
820/820 [==============================] - 0s 111us/step - loss: 0.1653 - acc: 0.7646 - val_loss: 0.1580 - val_acc: 0.8708
Epoch 744/900
820/820 [==============================] - 0s 98us/step - loss: 0.1651 - acc: 0.7659 - val_loss: 0.1606 - val_acc: 0.8596
Epoch 745/900
820/820 [==============================] - 0s 99us/step - loss: 0.1651 - acc: 0.7659 - val_loss: 0.1611 - val_acc: 0.8596
Epoch 746/900
820/820 [==============================] - 0s 95us/step - loss: 0.1651 - acc: 0.7646 - val_loss: 0.1587 - val_acc: 0.8708
Epoch 747/900
820/820 [==============================] - 0s 98us/step - loss: 0.1653 - acc: 0.7695 - val_loss: 0.1539 - val_acc: 0.8652
Epoch 748/900
820/820 [==============================] - 0s 107us/step - loss: 0.1654 - acc: 0.7720 - val_loss: 0.1593 - val_acc: 0.8708
Epoch 749/900
820/820 [==============================] - 0s 99us/step - loss: 0.1651 - acc: 0.7671 - val_loss: 0.1596 - val_acc: 0.8652
Epoch 750/900
820/820 [==============================] - 0s 105us/step - loss: 0.1651 - acc: 0.7634 - val_loss: 0.1674 - val_acc: 0.8483
Epoch 751/900
820/820 [==============================] - 0s 99us/step - loss: 0.1653 - acc: 0.7671 - val_loss: 0.1668 - val_acc: 0.8539
Epoch 752/900
820/820 [==============================] - 0s 94us/step - loss: 0.1653 - acc: 0.7671 - val_loss: 0.1607 - val_acc: 0.8652
Epoch 753/900
820/820 [==============================] - 0s 97us/step - loss: 0.1650 - acc: 0.7659 - val_loss: 0.1689 - val_acc: 0.8427
Epoch 754/900
820/820 [==============================] - 0s 102us/step - loss: 0.1652 - acc: 0.7646 - val_loss: 0.1615 - val_acc: 0.8596
Epoch 755/900
820/820 [==============================] - 0s 99us/step - loss: 0.1665 - acc: 0.7659 - val_loss: 0.1507 - val_acc: 0.8652
Epoch 756/900
820/820 [==============================] - 0s 99us/step - loss: 0.1652 - acc: 0.7671 - val_loss: 0.1671 - val_acc: 0.8483
Epoch 757/900
820/820 [==============================] - 0s 96us/step - loss: 0.1655 - acc: 0.7671 - val_loss: 0.1665 - val_acc: 0.8483
Epoch 758/900
820/820 [==============================] - 0s 95us/step - loss: 0.1652 - acc: 0.7671 - val_loss: 0.1531 - val_acc: 0.8708
Epoch 759/900
820/820 [==============================] - 0s 103us/step - loss: 0.1655 - acc: 0.7695 - val_loss: 0.1553 - val_acc: 0.8708
Epoch 760/900
820/820 [==============================] - 0s 106us/step - loss: 0.1653 - acc: 0.7646 - val_loss: 0.1686 - val_acc: 0.8427
Epoch 761/900
820/820 [==============================] - 0s 104us/step - loss: 0.1650 - acc: 0.7683 - val_loss: 0.1562 - val_acc: 0.8708
Epoch 762/900
820/820 [==============================] - 0s 107us/step - loss: 0.1652 - acc: 0.7695 - val_loss: 0.1575 - val_acc: 0.8652
Epoch 763/900
820/820 [==============================] - 0s 100us/step - loss: 0.1651 - acc: 0.7707 - val_loss: 0.1601 - val_acc: 0.8652
Epoch 764/900
820/820 [==============================] - 0s 95us/step - loss: 0.1648 - acc: 0.7659 - val_loss: 0.1658 - val_acc: 0.8539
Epoch 765/900
820/820 [==============================] - 0s 94us/step - loss: 0.1650 - acc: 0.7659 - val_loss: 0.1616 - val_acc: 0.8596
Epoch 766/900
820/820 [==============================] - 0s 104us/step - loss: 0.1653 - acc: 0.7695 - val_loss: 0.1543 - val_acc: 0.8708
Epoch 767/900
820/820 [==============================] - 0s 99us/step - loss: 0.1650 - acc: 0.7659 - val_loss: 0.1629 - val_acc: 0.8596
Epoch 768/900
820/820 [==============================] - 0s 98us/step - loss: 0.1651 - acc: 0.7671 - val_loss: 0.1546 - val_acc: 0.8708
Epoch 769/900
820/820 [==============================] - 0s 96us/step - loss: 0.1648 - acc: 0.7683 - val_loss: 0.1635 - val_acc: 0.8539
Epoch 770/900
820/820 [==============================] - 0s 99us/step - loss: 0.1650 - acc: 0.7646 - val_loss: 0.1610 - val_acc: 0.8596
Epoch 771/900
820/820 [==============================] - 0s 105us/step - loss: 0.1652 - acc: 0.7707 - val_loss: 0.1572 - val_acc: 0.8708
Epoch 772/900
820/820 [==============================] - 0s 113us/step - loss: 0.1647 - acc: 0.7683 - val_loss: 0.1634 - val_acc: 0.8596
Epoch 773/900
820/820 [==============================] - 0s 97us/step - loss: 0.1650 - acc: 0.7646 - val_loss: 0.1679 - val_acc: 0.8483
Epoch 774/900
820/820 [==============================] - 0s 110us/step - loss: 0.1654 - acc: 0.7622 - val_loss: 0.1601 - val_acc: 0.8652
Epoch 775/900
820/820 [==============================] - 0s 97us/step - loss: 0.1649 - acc: 0.7646 - val_loss: 0.1614 - val_acc: 0.8652
Epoch 776/900
820/820 [==============================] - 0s 104us/step - loss: 0.1664 - acc: 0.7683 - val_loss: 0.1452 - val_acc: 0.8708
Epoch 777/900
820/820 [==============================] - 0s 100us/step - loss: 0.1650 - acc: 0.7659 - val_loss: 0.1637 - val_acc: 0.8539
Epoch 778/900
820/820 [==============================] - 0s 98us/step - loss: 0.1651 - acc: 0.7646 - val_loss: 0.1691 - val_acc: 0.8427
Epoch 779/900
820/820 [==============================] - 0s 103us/step - loss: 0.1651 - acc: 0.7683 - val_loss: 0.1627 - val_acc: 0.8539
Epoch 780/900
820/820 [==============================] - 0s 99us/step - loss: 0.1648 - acc: 0.7720 - val_loss: 0.1538 - val_acc: 0.8708
Epoch 781/900
820/820 [==============================] - 0s 96us/step - loss: 0.1649 - acc: 0.7683 - val_loss: 0.1588 - val_acc: 0.8652
Epoch 782/900
820/820 [==============================] - 0s 104us/step - loss: 0.1648 - acc: 0.7720 - val_loss: 0.1593 - val_acc: 0.8652
Epoch 783/900
820/820 [==============================] - 0s 100us/step - loss: 0.1648 - acc: 0.7683 - val_loss: 0.1647 - val_acc: 0.8539
Epoch 784/900
820/820 [==============================] - 0s 104us/step - loss: 0.1647 - acc: 0.7659 - val_loss: 0.1571 - val_acc: 0.8652
Epoch 785/900
820/820 [==============================] - 0s 99us/step - loss: 0.1651 - acc: 0.7695 - val_loss: 0.1562 - val_acc: 0.8652
Epoch 786/900
820/820 [==============================] - 0s 102us/step - loss: 0.1649 - acc: 0.7646 - val_loss: 0.1660 - val_acc: 0.8483
Epoch 787/900
820/820 [==============================] - 0s 97us/step - loss: 0.1648 - acc: 0.7707 - val_loss: 0.1565 - val_acc: 0.8652
Epoch 788/900
820/820 [==============================] - 0s 98us/step - loss: 0.1646 - acc: 0.7695 - val_loss: 0.1623 - val_acc: 0.8539
Epoch 789/900
820/820 [==============================] - 0s 107us/step - loss: 0.1647 - acc: 0.7622 - val_loss: 0.1595 - val_acc: 0.8652
Epoch 790/900
820/820 [==============================] - 0s 99us/step - loss: 0.1645 - acc: 0.7707 - val_loss: 0.1568 - val_acc: 0.8652
Epoch 791/900
820/820 [==============================] - 0s 108us/step - loss: 0.1646 - acc: 0.7683 - val_loss: 0.1591 - val_acc: 0.8652
Epoch 792/900
820/820 [==============================] - 0s 98us/step - loss: 0.1654 - acc: 0.7683 - val_loss: 0.1521 - val_acc: 0.8708
Epoch 793/900
820/820 [==============================] - 0s 106us/step - loss: 0.1649 - acc: 0.7671 - val_loss: 0.1671 - val_acc: 0.8539
Epoch 794/900
820/820 [==============================] - 0s 96us/step - loss: 0.1652 - acc: 0.7683 - val_loss: 0.1558 - val_acc: 0.8652
Epoch 795/900
820/820 [==============================] - 0s 114us/step - loss: 0.1648 - acc: 0.7683 - val_loss: 0.1706 - val_acc: 0.8371
Epoch 796/900
820/820 [==============================] - 0s 102us/step - loss: 0.1653 - acc: 0.7671 - val_loss: 0.1627 - val_acc: 0.8539
Epoch 797/900
820/820 [==============================] - 0s 104us/step - loss: 0.1651 - acc: 0.7732 - val_loss: 0.1483 - val_acc: 0.8764
Epoch 798/900
820/820 [==============================] - 0s 101us/step - loss: 0.1647 - acc: 0.7707 - val_loss: 0.1626 - val_acc: 0.8539
Epoch 799/900
820/820 [==============================] - 0s 108us/step - loss: 0.1645 - acc: 0.7683 - val_loss: 0.1605 - val_acc: 0.8539
Epoch 800/900
820/820 [==============================] - 0s 97us/step - loss: 0.1646 - acc: 0.7683 - val_loss: 0.1564 - val_acc: 0.8652
Epoch 801/900
820/820 [==============================] - 0s 98us/step - loss: 0.1645 - acc: 0.7695 - val_loss: 0.1577 - val_acc: 0.8596
Epoch 802/900
820/820 [==============================] - 0s 96us/step - loss: 0.1645 - acc: 0.7671 - val_loss: 0.1546 - val_acc: 0.8708
Epoch 803/900
820/820 [==============================] - 0s 96us/step - loss: 0.1646 - acc: 0.7659 - val_loss: 0.1627 - val_acc: 0.8539
Epoch 804/900
820/820 [==============================] - 0s 106us/step - loss: 0.1644 - acc: 0.7671 - val_loss: 0.1596 - val_acc: 0.8596
Epoch 805/900
820/820 [==============================] - 0s 96us/step - loss: 0.1650 - acc: 0.7683 - val_loss: 0.1517 - val_acc: 0.8652
Epoch 806/900
820/820 [==============================] - 0s 98us/step - loss: 0.1646 - acc: 0.7695 - val_loss: 0.1651 - val_acc: 0.8483
Epoch 807/900
820/820 [==============================] - 0s 102us/step - loss: 0.1650 - acc: 0.7695 - val_loss: 0.1633 - val_acc: 0.8539
Epoch 808/900
820/820 [==============================] - 0s 105us/step - loss: 0.1643 - acc: 0.7707 - val_loss: 0.1550 - val_acc: 0.8708
Epoch 809/900
820/820 [==============================] - 0s 107us/step - loss: 0.1644 - acc: 0.7695 - val_loss: 0.1605 - val_acc: 0.8596
Epoch 810/900
820/820 [==============================] - 0s 99us/step - loss: 0.1644 - acc: 0.7671 - val_loss: 0.1608 - val_acc: 0.8596
Epoch 811/900
820/820 [==============================] - 0s 111us/step - loss: 0.1647 - acc: 0.7659 - val_loss: 0.1539 - val_acc: 0.8652
Epoch 812/900
820/820 [==============================] - 0s 104us/step - loss: 0.1643 - acc: 0.7744 - val_loss: 0.1621 - val_acc: 0.8596
Epoch 813/900
820/820 [==============================] - 0s 105us/step - loss: 0.1644 - acc: 0.7646 - val_loss: 0.1659 - val_acc: 0.8539
Epoch 814/900
820/820 [==============================] - 0s 101us/step - loss: 0.1646 - acc: 0.7671 - val_loss: 0.1536 - val_acc: 0.8652
Epoch 815/900
820/820 [==============================] - 0s 96us/step - loss: 0.1644 - acc: 0.7695 - val_loss: 0.1631 - val_acc: 0.8539
Epoch 816/900
820/820 [==============================] - 0s 106us/step - loss: 0.1643 - acc: 0.7646 - val_loss: 0.1636 - val_acc: 0.8539
Epoch 817/900
820/820 [==============================] - 0s 95us/step - loss: 0.1642 - acc: 0.7683 - val_loss: 0.1565 - val_acc: 0.8652
Epoch 818/900
820/820 [==============================] - 0s 98us/step - loss: 0.1643 - acc: 0.7720 - val_loss: 0.1606 - val_acc: 0.8596
Epoch 819/900
820/820 [==============================] - 0s 102us/step - loss: 0.1642 - acc: 0.7671 - val_loss: 0.1619 - val_acc: 0.8596
Epoch 820/900
820/820 [==============================] - 0s 99us/step - loss: 0.1644 - acc: 0.7671 - val_loss: 0.1606 - val_acc: 0.8596
Epoch 821/900
820/820 [==============================] - 0s 99us/step - loss: 0.1647 - acc: 0.7671 - val_loss: 0.1641 - val_acc: 0.8483
Epoch 822/900
820/820 [==============================] - 0s 107us/step - loss: 0.1640 - acc: 0.7671 - val_loss: 0.1548 - val_acc: 0.8708
Epoch 823/900
820/820 [==============================] - 0s 103us/step - loss: 0.1644 - acc: 0.7707 - val_loss: 0.1579 - val_acc: 0.8596
Epoch 824/900
820/820 [==============================] - 0s 99us/step - loss: 0.1643 - acc: 0.7683 - val_loss: 0.1695 - val_acc: 0.8315
Epoch 825/900
820/820 [==============================] - 0s 100us/step - loss: 0.1640 - acc: 0.7683 - val_loss: 0.1563 - val_acc: 0.8708
Epoch 826/900
820/820 [==============================] - 0s 101us/step - loss: 0.1644 - acc: 0.7707 - val_loss: 0.1528 - val_acc: 0.8708
Epoch 827/900
820/820 [==============================] - 0s 96us/step - loss: 0.1643 - acc: 0.7732 - val_loss: 0.1602 - val_acc: 0.8652
Epoch 828/900
820/820 [==============================] - 0s 101us/step - loss: 0.1642 - acc: 0.7671 - val_loss: 0.1658 - val_acc: 0.8483
Epoch 829/900
820/820 [==============================] - 0s 97us/step - loss: 0.1643 - acc: 0.7695 - val_loss: 0.1649 - val_acc: 0.8483
Epoch 830/900
820/820 [==============================] - 0s 94us/step - loss: 0.1643 - acc: 0.7671 - val_loss: 0.1588 - val_acc: 0.8596
Epoch 831/900
820/820 [==============================] - 0s 97us/step - loss: 0.1649 - acc: 0.7683 - val_loss: 0.1497 - val_acc: 0.8652
Epoch 832/900
820/820 [==============================] - 0s 97us/step - loss: 0.1646 - acc: 0.7671 - val_loss: 0.1660 - val_acc: 0.8483
Epoch 833/900
820/820 [==============================] - 0s 97us/step - loss: 0.1643 - acc: 0.7695 - val_loss: 0.1616 - val_acc: 0.8596
Epoch 834/900
820/820 [==============================] - 0s 110us/step - loss: 0.1641 - acc: 0.7671 - val_loss: 0.1595 - val_acc: 0.8596
Epoch 835/900
820/820 [==============================] - 0s 99us/step - loss: 0.1641 - acc: 0.7671 - val_loss: 0.1606 - val_acc: 0.8596
Epoch 836/900
820/820 [==============================] - 0s 96us/step - loss: 0.1644 - acc: 0.7683 - val_loss: 0.1535 - val_acc: 0.8708
Epoch 837/900
820/820 [==============================] - 0s 103us/step - loss: 0.1640 - acc: 0.7707 - val_loss: 0.1630 - val_acc: 0.8539
Epoch 838/900
820/820 [==============================] - 0s 97us/step - loss: 0.1649 - acc: 0.7622 - val_loss: 0.1670 - val_acc: 0.8483
Epoch 839/900
820/820 [==============================] - 0s 102us/step - loss: 0.1641 - acc: 0.7671 - val_loss: 0.1596 - val_acc: 0.8596
Epoch 840/900
820/820 [==============================] - 0s 95us/step - loss: 0.1640 - acc: 0.7683 - val_loss: 0.1606 - val_acc: 0.8539
Epoch 841/900
820/820 [==============================] - 0s 104us/step - loss: 0.1645 - acc: 0.7707 - val_loss: 0.1543 - val_acc: 0.8652
Epoch 842/900
820/820 [==============================] - 0s 97us/step - loss: 0.1640 - acc: 0.7707 - val_loss: 0.1619 - val_acc: 0.8596
Epoch 843/900
820/820 [==============================] - 0s 105us/step - loss: 0.1640 - acc: 0.7671 - val_loss: 0.1601 - val_acc: 0.8596
Epoch 844/900
820/820 [==============================] - 0s 107us/step - loss: 0.1641 - acc: 0.7707 - val_loss: 0.1580 - val_acc: 0.8596
Epoch 845/900
820/820 [==============================] - 0s 97us/step - loss: 0.1641 - acc: 0.7659 - val_loss: 0.1643 - val_acc: 0.8483
Epoch 846/900
820/820 [==============================] - 0s 114us/step - loss: 0.1638 - acc: 0.7671 - val_loss: 0.1549 - val_acc: 0.8652
Epoch 847/900
820/820 [==============================] - 0s 104us/step - loss: 0.1643 - acc: 0.7707 - val_loss: 0.1595 - val_acc: 0.8596
Epoch 848/900
820/820 [==============================] - 0s 99us/step - loss: 0.1638 - acc: 0.7683 - val_loss: 0.1620 - val_acc: 0.8539
Epoch 849/900
820/820 [==============================] - 0s 103us/step - loss: 0.1646 - acc: 0.7683 - val_loss: 0.1665 - val_acc: 0.8483
Epoch 850/900
820/820 [==============================] - 0s 95us/step - loss: 0.1643 - acc: 0.7695 - val_loss: 0.1579 - val_acc: 0.8596
Epoch 851/900
820/820 [==============================] - 0s 100us/step - loss: 0.1638 - acc: 0.7683 - val_loss: 0.1602 - val_acc: 0.8596
Epoch 852/900
820/820 [==============================] - 0s 101us/step - loss: 0.1638 - acc: 0.7659 - val_loss: 0.1635 - val_acc: 0.8483
Epoch 853/900
820/820 [==============================] - 0s 100us/step - loss: 0.1640 - acc: 0.7683 - val_loss: 0.1579 - val_acc: 0.8596
Epoch 854/900
820/820 [==============================] - 0s 103us/step - loss: 0.1639 - acc: 0.7695 - val_loss: 0.1651 - val_acc: 0.8483
Epoch 855/900
820/820 [==============================] - 0s 106us/step - loss: 0.1642 - acc: 0.7683 - val_loss: 0.1661 - val_acc: 0.8427
Epoch 856/900
820/820 [==============================] - 0s 106us/step - loss: 0.1636 - acc: 0.7720 - val_loss: 0.1529 - val_acc: 0.8708
Epoch 857/900
820/820 [==============================] - 0s 115us/step - loss: 0.1640 - acc: 0.7707 - val_loss: 0.1598 - val_acc: 0.8596
Epoch 858/900
820/820 [==============================] - 0s 98us/step - loss: 0.1638 - acc: 0.7671 - val_loss: 0.1641 - val_acc: 0.8483
Epoch 859/900
820/820 [==============================] - 0s 97us/step - loss: 0.1637 - acc: 0.7659 - val_loss: 0.1607 - val_acc: 0.8596
Epoch 860/900
820/820 [==============================] - 0s 100us/step - loss: 0.1638 - acc: 0.7671 - val_loss: 0.1603 - val_acc: 0.8539
Epoch 861/900
820/820 [==============================] - 0s 99us/step - loss: 0.1637 - acc: 0.7683 - val_loss: 0.1576 - val_acc: 0.8596
Epoch 862/900
820/820 [==============================] - 0s 104us/step - loss: 0.1638 - acc: 0.7695 - val_loss: 0.1592 - val_acc: 0.8596
Epoch 863/900
820/820 [==============================] - 0s 97us/step - loss: 0.1638 - acc: 0.7683 - val_loss: 0.1590 - val_acc: 0.8596
Epoch 864/900
820/820 [==============================] - 0s 101us/step - loss: 0.1638 - acc: 0.7695 - val_loss: 0.1584 - val_acc: 0.8596
Epoch 865/900
820/820 [==============================] - 0s 105us/step - loss: 0.1644 - acc: 0.7671 - val_loss: 0.1619 - val_acc: 0.8539
Epoch 866/900
820/820 [==============================] - 0s 122us/step - loss: 0.1643 - acc: 0.7695 - val_loss: 0.1482 - val_acc: 0.8652
Epoch 867/900
820/820 [==============================] - 0s 107us/step - loss: 0.1643 - acc: 0.7695 - val_loss: 0.1597 - val_acc: 0.8596
Epoch 868/900
820/820 [==============================] - 0s 108us/step - loss: 0.1638 - acc: 0.7671 - val_loss: 0.1625 - val_acc: 0.8483
Epoch 869/900
820/820 [==============================] - 0s 103us/step - loss: 0.1640 - acc: 0.7707 - val_loss: 0.1627 - val_acc: 0.8483
Epoch 870/900
820/820 [==============================] - 0s 100us/step - loss: 0.1640 - acc: 0.7671 - val_loss: 0.1506 - val_acc: 0.8708
Epoch 871/900
820/820 [==============================] - 0s 102us/step - loss: 0.1645 - acc: 0.7720 - val_loss: 0.1541 - val_acc: 0.8652
Epoch 872/900
820/820 [==============================] - 0s 99us/step - loss: 0.1637 - acc: 0.7720 - val_loss: 0.1617 - val_acc: 0.8539
Epoch 873/900
820/820 [==============================] - 0s 99us/step - loss: 0.1636 - acc: 0.7695 - val_loss: 0.1568 - val_acc: 0.8596
Epoch 874/900
820/820 [==============================] - 0s 109us/step - loss: 0.1638 - acc: 0.7671 - val_loss: 0.1649 - val_acc: 0.8483
Epoch 875/900
820/820 [==============================] - 0s 111us/step - loss: 0.1638 - acc: 0.7695 - val_loss: 0.1574 - val_acc: 0.8596
Epoch 876/900
820/820 [==============================] - 0s 113us/step - loss: 0.1638 - acc: 0.7659 - val_loss: 0.1659 - val_acc: 0.8427
Epoch 877/900
820/820 [==============================] - 0s 100us/step - loss: 0.1638 - acc: 0.7671 - val_loss: 0.1560 - val_acc: 0.8596
Epoch 878/900
820/820 [==============================] - 0s 105us/step - loss: 0.1638 - acc: 0.7707 - val_loss: 0.1550 - val_acc: 0.8596
Epoch 879/900
820/820 [==============================] - 0s 96us/step - loss: 0.1640 - acc: 0.7671 - val_loss: 0.1630 - val_acc: 0.8483
Epoch 880/900
820/820 [==============================] - 0s 95us/step - loss: 0.1637 - acc: 0.7695 - val_loss: 0.1574 - val_acc: 0.8596
Epoch 881/900
820/820 [==============================] - 0s 106us/step - loss: 0.1635 - acc: 0.7683 - val_loss: 0.1590 - val_acc: 0.8596
Epoch 882/900
820/820 [==============================] - 0s 95us/step - loss: 0.1636 - acc: 0.7671 - val_loss: 0.1604 - val_acc: 0.8539
Epoch 883/900
820/820 [==============================] - 0s 97us/step - loss: 0.1636 - acc: 0.7646 - val_loss: 0.1578 - val_acc: 0.8539
Epoch 884/900
820/820 [==============================] - 0s 109us/step - loss: 0.1638 - acc: 0.7707 - val_loss: 0.1546 - val_acc: 0.8596
Epoch 885/900
820/820 [==============================] - 0s 97us/step - loss: 0.1635 - acc: 0.7683 - val_loss: 0.1638 - val_acc: 0.8483
Epoch 886/900
820/820 [==============================] - 0s 98us/step - loss: 0.1636 - acc: 0.7683 - val_loss: 0.1600 - val_acc: 0.8596
Epoch 887/900
820/820 [==============================] - 0s 103us/step - loss: 0.1635 - acc: 0.7707 - val_loss: 0.1553 - val_acc: 0.8596
Epoch 888/900
820/820 [==============================] - 0s 96us/step - loss: 0.1636 - acc: 0.7707 - val_loss: 0.1575 - val_acc: 0.8596
Epoch 889/900
820/820 [==============================] - 0s 97us/step - loss: 0.1633 - acc: 0.7683 - val_loss: 0.1633 - val_acc: 0.8539
Epoch 890/900
820/820 [==============================] - 0s 103us/step - loss: 0.1637 - acc: 0.7659 - val_loss: 0.1662 - val_acc: 0.8427
Epoch 891/900
820/820 [==============================] - 0s 98us/step - loss: 0.1637 - acc: 0.7659 - val_loss: 0.1550 - val_acc: 0.8596
Epoch 892/900
820/820 [==============================] - 0s 98us/step - loss: 0.1637 - acc: 0.7683 - val_loss: 0.1619 - val_acc: 0.8596
Epoch 893/900
820/820 [==============================] - 0s 108us/step - loss: 0.1634 - acc: 0.7707 - val_loss: 0.1586 - val_acc: 0.8596
Epoch 894/900
820/820 [==============================] - 0s 96us/step - loss: 0.1635 - acc: 0.7671 - val_loss: 0.1599 - val_acc: 0.8596
Epoch 895/900
820/820 [==============================] - 0s 98us/step - loss: 0.1634 - acc: 0.7720 - val_loss: 0.1596 - val_acc: 0.8539
Epoch 896/900
820/820 [==============================] - 0s 104us/step - loss: 0.1631 - acc: 0.7707 - val_loss: 0.1707 - val_acc: 0.8315
Epoch 897/900
820/820 [==============================] - 0s 104us/step - loss: 0.1637 - acc: 0.7646 - val_loss: 0.1638 - val_acc: 0.8539
Epoch 898/900
820/820 [==============================] - 0s 111us/step - loss: 0.1635 - acc: 0.7683 - val_loss: 0.1602 - val_acc: 0.8596
Epoch 899/900
820/820 [==============================] - 0s 98us/step - loss: 0.1638 - acc: 0.7634 - val_loss: 0.1665 - val_acc: 0.8427
Epoch 900/900
820/820 [==============================] - 0s 103us/step - loss: 0.1634 - acc: 0.7683 - val_loss: 0.1575 - val_acc: 0.8596
In [59]:
results = test17.evaluate(X_test1, Y_test2)
print(results)
top_layer = test17.layers[0]
plt.title('Visualize First Layer Filter 1')
plt.plot(top_layer.get_weights()[0][:, :, 0].squeeze())
plt.show()
plt.title('Visualize First Layer Filter 2')
plt.plot(top_layer.get_weights()[0][:, :, 1].squeeze())
plt.show()
177/177 [==============================] - 0s 132us/step
[0.16641162209591623, 0.8418079099412692]

Best Results for Set B - Data of length 500 Samples

The tradeoff with this dataset is that the segments are longer, but there is less data

We played around with the CNN models a lot, and ultimately found that when we were dealing with data segments of length 500, it was much more complicated than data of length 100. For example, using a similar model to the one that performed the best for Set A with 1 CNN layer and 1 Dense output layer, we only got ~81% and the filters were not shapes that we would expect. We found that adding more layers in this case improved the outcome a little, though the filters were still not so clear.

We used two dense layers and two 1D CNN Layer for our best result on this dataset --> Test Acc = ~ 86%

In [60]:
test37 = Sequential()
test37.add(Conv1D(4, (40),
                 activation='relu',
                 input_shape=(500,1)))
test37.add(MaxPooling1D(pool_size=(2)))
test37.add(Conv1D(4, (40),
                 activation='relu',
                 input_shape=(500,1)))
test37.add(Flatten())
test37.add(Dense(10, activation = 'sigmoid'))
test37.add(Dense(1, activation = 'sigmoid'))

print(test37.summary())

test37.compile(loss='mean_squared_error', optimizer='Adam',metrics=['accuracy'])
history = test37.fit(X_train_500,Y_train2_500, epochs=100, batch_size=500, validation_data=(X_val_500,Y_val2_500))
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv1d_5 (Conv1D)            (None, 461, 4)            164       
_________________________________________________________________
max_pooling1d_2 (MaxPooling1 (None, 230, 4)            0         
_________________________________________________________________
conv1d_6 (Conv1D)            (None, 191, 4)            644       
_________________________________________________________________
flatten_4 (Flatten)          (None, 764)               0         
_________________________________________________________________
dense_4 (Dense)              (None, 10)                7650      
_________________________________________________________________
dense_5 (Dense)              (None, 1)                 11        
=================================================================
Total params: 8,469
Trainable params: 8,469
Non-trainable params: 0
_________________________________________________________________
None
Train on 160 samples, validate on 38 samples
Epoch 1/100
160/160 [==============================] - 1s 7ms/step - loss: 0.2574 - acc: 0.5000 - val_loss: 0.2160 - val_acc: 0.7368
Epoch 2/100
160/160 [==============================] - 0s 720us/step - loss: 0.2565 - acc: 0.5000 - val_loss: 0.2163 - val_acc: 0.7368
Epoch 3/100
160/160 [==============================] - 0s 1ms/step - loss: 0.2552 - acc: 0.5000 - val_loss: 0.2168 - val_acc: 0.7368
Epoch 4/100
160/160 [==============================] - 0s 621us/step - loss: 0.2538 - acc: 0.5000 - val_loss: 0.2174 - val_acc: 0.7368
Epoch 5/100
160/160 [==============================] - 0s 545us/step - loss: 0.2523 - acc: 0.5063 - val_loss: 0.2181 - val_acc: 0.7368
Epoch 6/100
160/160 [==============================] - 0s 577us/step - loss: 0.2507 - acc: 0.5063 - val_loss: 0.2190 - val_acc: 0.7368
Epoch 7/100
160/160 [==============================] - 0s 545us/step - loss: 0.2489 - acc: 0.5250 - val_loss: 0.2201 - val_acc: 0.7368
Epoch 8/100
160/160 [==============================] - 0s 550us/step - loss: 0.2469 - acc: 0.5312 - val_loss: 0.2216 - val_acc: 0.7368
Epoch 9/100
160/160 [==============================] - 0s 540us/step - loss: 0.2448 - acc: 0.5500 - val_loss: 0.2236 - val_acc: 0.7895
Epoch 10/100
160/160 [==============================] - 0s 560us/step - loss: 0.2426 - acc: 0.5625 - val_loss: 0.2264 - val_acc: 0.7632
Epoch 11/100
160/160 [==============================] - 0s 598us/step - loss: 0.2404 - acc: 0.5875 - val_loss: 0.2303 - val_acc: 0.8158
Epoch 12/100
160/160 [==============================] - 0s 565us/step - loss: 0.2383 - acc: 0.6187 - val_loss: 0.2352 - val_acc: 0.8421
Epoch 13/100
160/160 [==============================] - 0s 576us/step - loss: 0.2365 - acc: 0.6438 - val_loss: 0.2405 - val_acc: 0.6579
Epoch 14/100
160/160 [==============================] - 0s 585us/step - loss: 0.2350 - acc: 0.6562 - val_loss: 0.2454 - val_acc: 0.5526
Epoch 15/100
160/160 [==============================] - 0s 574us/step - loss: 0.2337 - acc: 0.6438 - val_loss: 0.2494 - val_acc: 0.4737
Epoch 16/100
160/160 [==============================] - 0s 551us/step - loss: 0.2325 - acc: 0.6000 - val_loss: 0.2521 - val_acc: 0.4737
Epoch 17/100
160/160 [==============================] - 0s 527us/step - loss: 0.2313 - acc: 0.5938 - val_loss: 0.2533 - val_acc: 0.4211
Epoch 18/100
160/160 [==============================] - 0s 573us/step - loss: 0.2299 - acc: 0.6000 - val_loss: 0.2530 - val_acc: 0.4211
Epoch 19/100
160/160 [==============================] - 0s 515us/step - loss: 0.2284 - acc: 0.6187 - val_loss: 0.2514 - val_acc: 0.4211
Epoch 20/100
160/160 [==============================] - 0s 578us/step - loss: 0.2267 - acc: 0.6187 - val_loss: 0.2481 - val_acc: 0.5000
Epoch 21/100
160/160 [==============================] - 0s 570us/step - loss: 0.2249 - acc: 0.6313 - val_loss: 0.2438 - val_acc: 0.5789
Epoch 22/100
160/160 [==============================] - 0s 552us/step - loss: 0.2229 - acc: 0.6562 - val_loss: 0.2391 - val_acc: 0.7368
Epoch 23/100
160/160 [==============================] - 0s 593us/step - loss: 0.2210 - acc: 0.6750 - val_loss: 0.2351 - val_acc: 0.8158
Epoch 24/100
160/160 [==============================] - 0s 587us/step - loss: 0.2195 - acc: 0.6875 - val_loss: 0.2319 - val_acc: 0.8421
Epoch 25/100
160/160 [==============================] - 0s 591us/step - loss: 0.2181 - acc: 0.7000 - val_loss: 0.2291 - val_acc: 0.8684
Epoch 26/100
160/160 [==============================] - 0s 572us/step - loss: 0.2168 - acc: 0.7000 - val_loss: 0.2265 - val_acc: 0.8947
Epoch 27/100
160/160 [==============================] - 0s 588us/step - loss: 0.2156 - acc: 0.7000 - val_loss: 0.2240 - val_acc: 0.8947
Epoch 28/100
160/160 [==============================] - 0s 519us/step - loss: 0.2143 - acc: 0.6938 - val_loss: 0.2213 - val_acc: 0.8947
Epoch 29/100
160/160 [==============================] - 0s 602us/step - loss: 0.2130 - acc: 0.6938 - val_loss: 0.2184 - val_acc: 0.8947
Epoch 30/100
160/160 [==============================] - 0s 574us/step - loss: 0.2117 - acc: 0.7000 - val_loss: 0.2155 - val_acc: 0.8947
Epoch 31/100
160/160 [==============================] - 0s 629us/step - loss: 0.2104 - acc: 0.7000 - val_loss: 0.2127 - val_acc: 0.8947
Epoch 32/100
160/160 [==============================] - 0s 598us/step - loss: 0.2091 - acc: 0.7000 - val_loss: 0.2101 - val_acc: 0.8947
Epoch 33/100
160/160 [==============================] - 0s 725us/step - loss: 0.2079 - acc: 0.7000 - val_loss: 0.2076 - val_acc: 0.8947
Epoch 34/100
160/160 [==============================] - 0s 591us/step - loss: 0.2067 - acc: 0.7063 - val_loss: 0.2053 - val_acc: 0.8947
Epoch 35/100
160/160 [==============================] - 0s 563us/step - loss: 0.2056 - acc: 0.7000 - val_loss: 0.2033 - val_acc: 0.8947
Epoch 36/100
160/160 [==============================] - 0s 576us/step - loss: 0.2046 - acc: 0.7000 - val_loss: 0.2015 - val_acc: 0.8947
Epoch 37/100
160/160 [==============================] - 0s 536us/step - loss: 0.2036 - acc: 0.7125 - val_loss: 0.1998 - val_acc: 0.8947
Epoch 38/100
160/160 [==============================] - 0s 608us/step - loss: 0.2027 - acc: 0.7000 - val_loss: 0.1979 - val_acc: 0.8947
Epoch 39/100
160/160 [==============================] - 0s 543us/step - loss: 0.2019 - acc: 0.6938 - val_loss: 0.1957 - val_acc: 0.8947
Epoch 40/100
160/160 [==============================] - 0s 562us/step - loss: 0.2011 - acc: 0.6938 - val_loss: 0.1931 - val_acc: 0.8947
Epoch 41/100
160/160 [==============================] - 0s 676us/step - loss: 0.2003 - acc: 0.6938 - val_loss: 0.1901 - val_acc: 0.8947
Epoch 42/100
160/160 [==============================] - 0s 597us/step - loss: 0.1994 - acc: 0.6938 - val_loss: 0.1869 - val_acc: 0.8947
Epoch 43/100
160/160 [==============================] - 0s 582us/step - loss: 0.1986 - acc: 0.6938 - val_loss: 0.1838 - val_acc: 0.8947
Epoch 44/100
160/160 [==============================] - 0s 523us/step - loss: 0.1977 - acc: 0.7000 - val_loss: 0.1808 - val_acc: 0.8947
Epoch 45/100
160/160 [==============================] - 0s 522us/step - loss: 0.1970 - acc: 0.7000 - val_loss: 0.1779 - val_acc: 0.8947
Epoch 46/100
160/160 [==============================] - 0s 539us/step - loss: 0.1962 - acc: 0.7000 - val_loss: 0.1752 - val_acc: 0.8947
Epoch 47/100
160/160 [==============================] - 0s 581us/step - loss: 0.1955 - acc: 0.7000 - val_loss: 0.1727 - val_acc: 0.8947
Epoch 48/100
160/160 [==============================] - 0s 648us/step - loss: 0.1947 - acc: 0.7000 - val_loss: 0.1703 - val_acc: 0.8947
Epoch 49/100
160/160 [==============================] - 0s 604us/step - loss: 0.1940 - acc: 0.7000 - val_loss: 0.1681 - val_acc: 0.8947
Epoch 50/100
160/160 [==============================] - 0s 577us/step - loss: 0.1933 - acc: 0.7000 - val_loss: 0.1661 - val_acc: 0.8947
Epoch 51/100
160/160 [==============================] - 0s 572us/step - loss: 0.1926 - acc: 0.7063 - val_loss: 0.1643 - val_acc: 0.8947
Epoch 52/100
160/160 [==============================] - 0s 610us/step - loss: 0.1919 - acc: 0.7063 - val_loss: 0.1625 - val_acc: 0.8947
Epoch 53/100
160/160 [==============================] - 0s 529us/step - loss: 0.1912 - acc: 0.7063 - val_loss: 0.1609 - val_acc: 0.8947
Epoch 54/100
160/160 [==============================] - 0s 600us/step - loss: 0.1906 - acc: 0.7063 - val_loss: 0.1594 - val_acc: 0.8947
Epoch 55/100
160/160 [==============================] - 0s 648us/step - loss: 0.1900 - acc: 0.7063 - val_loss: 0.1579 - val_acc: 0.8947
Epoch 56/100
160/160 [==============================] - 0s 528us/step - loss: 0.1893 - acc: 0.7063 - val_loss: 0.1565 - val_acc: 0.8947
Epoch 57/100
160/160 [==============================] - 0s 569us/step - loss: 0.1887 - acc: 0.7063 - val_loss: 0.1552 - val_acc: 0.8947
Epoch 58/100
160/160 [==============================] - 0s 562us/step - loss: 0.1881 - acc: 0.7063 - val_loss: 0.1538 - val_acc: 0.8947
Epoch 59/100
160/160 [==============================] - 0s 555us/step - loss: 0.1875 - acc: 0.7063 - val_loss: 0.1524 - val_acc: 0.8947
Epoch 60/100
160/160 [==============================] - 0s 530us/step - loss: 0.1870 - acc: 0.7063 - val_loss: 0.1511 - val_acc: 0.8947
Epoch 61/100
160/160 [==============================] - 0s 517us/step - loss: 0.1864 - acc: 0.7063 - val_loss: 0.1498 - val_acc: 0.8947
Epoch 62/100
160/160 [==============================] - 0s 591us/step - loss: 0.1858 - acc: 0.7063 - val_loss: 0.1486 - val_acc: 0.8947
Epoch 63/100
160/160 [==============================] - 0s 591us/step - loss: 0.1853 - acc: 0.7125 - val_loss: 0.1474 - val_acc: 0.8947
Epoch 64/100
160/160 [==============================] - 0s 600us/step - loss: 0.1847 - acc: 0.7188 - val_loss: 0.1463 - val_acc: 0.8947
Epoch 65/100
160/160 [==============================] - 0s 617us/step - loss: 0.1842 - acc: 0.7188 - val_loss: 0.1453 - val_acc: 0.8947
Epoch 66/100
160/160 [==============================] - 0s 557us/step - loss: 0.1837 - acc: 0.7250 - val_loss: 0.1443 - val_acc: 0.8947
Epoch 67/100
160/160 [==============================] - 0s 567us/step - loss: 0.1832 - acc: 0.7250 - val_loss: 0.1434 - val_acc: 0.8947
Epoch 68/100
160/160 [==============================] - 0s 580us/step - loss: 0.1828 - acc: 0.7250 - val_loss: 0.1426 - val_acc: 0.8947
Epoch 69/100
160/160 [==============================] - 0s 585us/step - loss: 0.1823 - acc: 0.7250 - val_loss: 0.1418 - val_acc: 0.8947
Epoch 70/100
160/160 [==============================] - 0s 532us/step - loss: 0.1819 - acc: 0.7250 - val_loss: 0.1411 - val_acc: 0.8947
Epoch 71/100
160/160 [==============================] - 0s 572us/step - loss: 0.1814 - acc: 0.7250 - val_loss: 0.1403 - val_acc: 0.8947
Epoch 72/100
160/160 [==============================] - 0s 538us/step - loss: 0.1810 - acc: 0.7250 - val_loss: 0.1396 - val_acc: 0.8947
Epoch 73/100
160/160 [==============================] - 0s 715us/step - loss: 0.1806 - acc: 0.7250 - val_loss: 0.1388 - val_acc: 0.8947
Epoch 74/100
160/160 [==============================] - 0s 553us/step - loss: 0.1802 - acc: 0.7250 - val_loss: 0.1380 - val_acc: 0.8947
Epoch 75/100
160/160 [==============================] - 0s 529us/step - loss: 0.1798 - acc: 0.7250 - val_loss: 0.1371 - val_acc: 0.8947
Epoch 76/100
160/160 [==============================] - 0s 636us/step - loss: 0.1795 - acc: 0.7250 - val_loss: 0.1362 - val_acc: 0.8947
Epoch 77/100
160/160 [==============================] - 0s 557us/step - loss: 0.1791 - acc: 0.7312 - val_loss: 0.1354 - val_acc: 0.8947
Epoch 78/100
160/160 [==============================] - 0s 596us/step - loss: 0.1788 - acc: 0.7312 - val_loss: 0.1345 - val_acc: 0.8947
Epoch 79/100
160/160 [==============================] - 0s 525us/step - loss: 0.1785 - acc: 0.7375 - val_loss: 0.1337 - val_acc: 0.8947
Epoch 80/100
160/160 [==============================] - 0s 579us/step - loss: 0.1782 - acc: 0.7375 - val_loss: 0.1330 - val_acc: 0.8947
Epoch 81/100
160/160 [==============================] - 0s 545us/step - loss: 0.1779 - acc: 0.7375 - val_loss: 0.1323 - val_acc: 0.8947
Epoch 82/100
160/160 [==============================] - 0s 534us/step - loss: 0.1776 - acc: 0.7437 - val_loss: 0.1316 - val_acc: 0.8947
Epoch 83/100
160/160 [==============================] - 0s 534us/step - loss: 0.1773 - acc: 0.7437 - val_loss: 0.1309 - val_acc: 0.8947
Epoch 84/100
160/160 [==============================] - 0s 605us/step - loss: 0.1770 - acc: 0.7500 - val_loss: 0.1303 - val_acc: 0.8947
Epoch 85/100
160/160 [==============================] - 0s 559us/step - loss: 0.1767 - acc: 0.7500 - val_loss: 0.1296 - val_acc: 0.8947
Epoch 86/100
160/160 [==============================] - 0s 626us/step - loss: 0.1765 - acc: 0.7500 - val_loss: 0.1290 - val_acc: 0.8947
Epoch 87/100
160/160 [==============================] - 0s 586us/step - loss: 0.1762 - acc: 0.7563 - val_loss: 0.1284 - val_acc: 0.8947
Epoch 88/100
160/160 [==============================] - 0s 529us/step - loss: 0.1760 - acc: 0.7563 - val_loss: 0.1278 - val_acc: 0.8947
Epoch 89/100
160/160 [==============================] - 0s 528us/step - loss: 0.1757 - acc: 0.7563 - val_loss: 0.1272 - val_acc: 0.8947
Epoch 90/100
160/160 [==============================] - 0s 514us/step - loss: 0.1755 - acc: 0.7563 - val_loss: 0.1266 - val_acc: 0.8947
Epoch 91/100
160/160 [==============================] - 0s 558us/step - loss: 0.1753 - acc: 0.7563 - val_loss: 0.1262 - val_acc: 0.8947
Epoch 92/100
160/160 [==============================] - 0s 527us/step - loss: 0.1750 - acc: 0.7563 - val_loss: 0.1257 - val_acc: 0.8947
Epoch 93/100
160/160 [==============================] - 0s 524us/step - loss: 0.1748 - acc: 0.7563 - val_loss: 0.1253 - val_acc: 0.8947
Epoch 94/100
160/160 [==============================] - 0s 578us/step - loss: 0.1746 - acc: 0.7563 - val_loss: 0.1249 - val_acc: 0.8947
Epoch 95/100
160/160 [==============================] - 0s 594us/step - loss: 0.1744 - acc: 0.7563 - val_loss: 0.1245 - val_acc: 0.8947
Epoch 96/100
160/160 [==============================] - 0s 551us/step - loss: 0.1742 - acc: 0.7563 - val_loss: 0.1242 - val_acc: 0.8947
Epoch 97/100
160/160 [==============================] - 0s 613us/step - loss: 0.1740 - acc: 0.7563 - val_loss: 0.1238 - val_acc: 0.8947
Epoch 98/100
160/160 [==============================] - 0s 550us/step - loss: 0.1738 - acc: 0.7563 - val_loss: 0.1235 - val_acc: 0.8947
Epoch 99/100
160/160 [==============================] - 0s 514us/step - loss: 0.1736 - acc: 0.7563 - val_loss: 0.1232 - val_acc: 0.8947
Epoch 100/100
160/160 [==============================] - 0s 515us/step - loss: 0.1734 - acc: 0.7563 - val_loss: 0.1230 - val_acc: 0.8947
In [61]:
results = test37.evaluate(X_test_500, Y_test2_500)
print(results)
top_layer = test37.layers[0]
plt.title('Visualize First Layer Filter 1')
plt.plot(top_layer.get_weights()[0][:, :, 0].squeeze())
plt.show()
plt.title('Visualize First Layer Filter 2')
plt.plot(top_layer.get_weights()[0][:, :, 1].squeeze())
plt.show()
37/37 [==============================] - 0s 376us/step
[0.1519212432809778, 0.8648648648648649]

These results are discussed more in the conclusion.

Latent and Generative Modeling

In [62]:
X_all = np.vstack([X_train2_500,X_val2_500,X_test2_500]).reshape((235, 500,1,1))
Y_all = np.vstack([Y_train2_500,Y_val2_500,Y_test2_500])
print('All examples shape:',X_all.shape,'All lables shape',Y_all.shape)
print('\nNote: While our data is 1D, we will add/remove dumby dimensions to make it easier to pass our data through')
print('generative models in the Keras API. For instance, Conv1DTranspose is not defined. In the future, we might')
print('consider implementing these layers/architectures for 1D data and submitting a pull request to the Keras git,')
print('but in the near-term we found this task to be out of scope.')
All examples shape: (235, 500, 1, 1) All lables shape (235, 1)

Note: While our data is 1D, we will add/remove dumby dimensions to make it easier to pass our data through
generative models in the Keras API. For instance, Conv1DTranspose is not defined. In the future, we might
consider implementing these layers/architectures for 1D data and submitting a pull request to the Keras git,
but in the near-term we found this task to be out of scope.
In [63]:
def sampling(args):
    z_mean, z_log_var = args
    epsilon = K.random_normal(shape=(K.shape(z_mean)[0], latent_dim),
                              mean=0., stddev=epsilon_std)
    return z_mean + K.exp(z_log_var) * epsilon

def vae_loss(y_true, y_pred):
    """ Calculate loss = reconstruction loss + KL loss for each data in minibatch """
    # E[log P(X|z)]
    recon = K.sum(K.binary_crossentropy(y_pred, y_true), axis=1)
    # D_KL(Q(z|X) || P(z|X)); calculate in closed form as both dist. are Gaussian
    kl = 0.5 * K.sum(K.exp(z_log_var) + K.square(z_mean) - 1. - z_log_var, axis=1)

    return recon + kl

def shuffle_data(X_all,Y_all):
    idx = np.random.permutation(X_all.shape[0])
    X_all,Y_all = X_all[idx], Y_all[idx]

    split1=0.5
    s2=0.25
    trainX,trainY = (X_all[:int(260*split1)],Y_all[:int(260*split1)])
    valX,valY = (X_all[int(260*split1):int(260*split1)+int(260*s2)],Y_all[int(260*split1):int(260*split1)+int(260*s2)])
    testX,testY = (X_all[int(260*split1)+int(260*s2):],Y_all[int(260*split1)+int(260*s2):])
    return trainX,trainY,valX,valY,testX,testY,X_all,Y_all

Having developed a CNN architecture that classifies our stressed and relaxed EDA samples with over 80% accuracy, we now focus on effectively projecting our 500-dimensional samples onto a smaller-dimension latent space. Our intention is to be able to separate stressed and relaxed states in a smaller dimension, and we try to decide between points in the latent space using both supervised and unsupervised learning. Specifically, we run kMeans clustering and kNN classification after projecting onto our 500D signal onto 2D and 3D latent spaces. Potential applications of this approach, and the accuracies of classifying in these latent spaces is discussed after the plots of the latent space below. Note that we do a gridsearch over loss functions (reconstruction loss, binary_crossentropy, and mse) to compare their accuracies. Note that we also uniformly shuffle our dataset when training our models in order to prevent overfitting, and expose our models to data in such a way to maximize their generalizability. For example, if we trained our model first on just signals belonging to the stressed class and then on just signals belonging to the relaxed class, then it will likely fail to generalize well due to the stability-plasticity dynamics of the network.

Project onto 2D Latent Space

In [64]:
batch_size = 20
signal_shape = (500, 1, 1)

latent_dim = 2
intermediate_dim = 5
epsilon_std = 1.0

x = Input(shape=signal_shape)

conv_1 = Conv2D(2,
                kernel_size=(40,1),activation='relu')(x)


flat = layers.Flatten()(conv_1)
hidden = Dense(intermediate_dim, activation='sigmoid')(flat)

z_mean = Dense(latent_dim)(hidden)
z_log_var = Dense(latent_dim)(hidden)

z = Lambda(sampling, output_shape=(latent_dim,))([z_mean, z_log_var])

decoder_hid = Dense(latent_dim, activation='relu')

intermed = Dense(intermediate_dim, activation='relu')
    
decoder_upsample = Dense(2 * 461 * 1, activation='relu')

output_shape = (batch_size, 461, 1, 2)
decoder_reshape = layers.Reshape(output_shape[1:])


decoder_mean = Conv2DTranspose(1,
                             kernel_size=(40, 1),
                             padding='valid',
                             activation='relu')

hid_decoded1 = decoder_hid(z)
hid_decoded = intermed(hid_decoded1)
up_decoded = decoder_upsample(hid_decoded)
reshape_decoded = decoder_reshape(up_decoded)
x_decoded_mean_squash=decoder_mean(reshape_decoded)

# instantiate VAE model
vae = Model(x, x_decoded_mean_squash)
vae.summary()
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_1 (InputLayer)            (None, 500, 1, 1)    0                                            
__________________________________________________________________________________________________
conv2d_1 (Conv2D)               (None, 461, 1, 2)    82          input_1[0][0]                    
__________________________________________________________________________________________________
flatten_5 (Flatten)             (None, 922)          0           conv2d_1[0][0]                   
__________________________________________________________________________________________________
dense_6 (Dense)                 (None, 5)            4615        flatten_5[0][0]                  
__________________________________________________________________________________________________
dense_7 (Dense)                 (None, 2)            12          dense_6[0][0]                    
__________________________________________________________________________________________________
dense_8 (Dense)                 (None, 2)            12          dense_6[0][0]                    
__________________________________________________________________________________________________
lambda_1 (Lambda)               (None, 2)            0           dense_7[0][0]                    
                                                                 dense_8[0][0]                    
__________________________________________________________________________________________________
dense_9 (Dense)                 (None, 2)            6           lambda_1[0][0]                   
__________________________________________________________________________________________________
dense_10 (Dense)                (None, 5)            15          dense_9[0][0]                    
__________________________________________________________________________________________________
dense_11 (Dense)                (None, 922)          5532        dense_10[0][0]                   
__________________________________________________________________________________________________
reshape_1 (Reshape)             (None, 461, 1, 2)    0           dense_11[0][0]                   
__________________________________________________________________________________________________
conv2d_transpose_1 (Conv2DTrans (None, 500, 1, 1)    81          reshape_1[0][0]                  
==================================================================================================
Total params: 10,355
Trainable params: 10,355
Non-trainable params: 0
__________________________________________________________________________________________________
In [65]:
trainX,trainY,valX,valY,testX,testY,X_all,Y_all=shuffle_data(X_all,Y_all)

clust_err = 0
class_err = 0
count3 = 0
for i,loss in enumerate([vae_loss,'binary_crossentropy','mean_squared_error']):    
    if i == 0:
        title = 'vae_loss'
        print('\ntrained using vae_loss\n')
    else:
        title = str(loss)
        print('\ntrained using',str(loss))
    vae = Model(x, x_decoded_mean_squash)
    vae.compile(optimizer='rmsprop',loss=loss)
    vae.fit(trainX, trainX, epochs=100, batch_size=20,verbose=False)
    encoder = Model(x, z_mean)
    train_encode = encoder.predict(trainX, batch_size=20)
    for train, label,div in [(valX,valY,'val'),(testX,testY,'test'),(X_all,Y_all,'all')]:
        print('encoding the',div,'set')
        label =label.reshape(label.shape[0])
        x_test_encoded = encoder.predict(train, batch_size=20)
        print('data_shape:',x_test_encoded.shape,'\n')

        kmeans = KMeans(n_clusters=2, random_state=0).fit(x_test_encoded)
        neigh = KNeighborsClassifier(n_neighbors=2)
        
        nlabel = kmeans.predict(x_test_encoded)
        
        print(train_encode.shape,trainY.shape,x_test_encoded.shape)
        neigh.fit(train_encode,trainY.reshape((trainY.shape[0],)))
        pred=neigh.predict(x_test_encoded)

        count = 0
        cnt2 = 0
        for i in range(label.shape[0]):
            p1=x_test_encoded[i][0]
            p2=x_test_encoded[i][1]
            if label[i] == nlabel[i]:
                count+=1
            if label[i] == pred[i]:
                cnt2+=1
            '''if label[i]:
                plt.scatter(p1,p2,c='r')
            else:
                plt.scatter(p1,p2,c='b')'''
        
        '''title2 = title + ' ' + div
        plt.title(title2)
        plt.xlabel('latent dimension 1')
        plt.ylabel('latent dimension 2')
        
        print('labeled stress states')
        plt.show(0)
    
        title2 = title + ' ' + div
        plt.title(title2)
        plt.scatter(x_test_encoded[:,0],x_test_encoded[:,1], c=nlabel, s=65, cmap='viridis')
        print('clustered stress states')
        plt.show(1)'''
        
        print('clustering acc:',count/label.shape[0],'\n')
        clust_err += count/label.shape[0]

        '''for i in range(pred.shape[0]):
            p1=x_test_encoded[i][0]
            p2=x_test_encoded[i][1]
            if pred[i] == 1:
                plt.scatter(p1,p2,c='m')
            else:
                plt.scatter(p1,p2,c='c')'''
                
        '''title2 = title + ' ' + div
        plt.title(title2)
        plt.xlabel('latent dimension 1')
        plt.ylabel('latent dimension 2')
        plt.show(2)'''
        print('knn-clasified stress states')
        print('classification acc:',cnt2/label.shape[0],'\n')
        class_err += cnt2/label.shape[0]
        count3 +=1
trained using vae_loss

encoding the val set
data_shape: (65, 2) 

(130, 2) (130, 1) (65, 2)
clustering acc: 0.6153846153846154 

knn-clasified stress states
classification acc: 0.5692307692307692 

encoding the test set
data_shape: (40, 2) 

(130, 2) (130, 1) (40, 2)
clustering acc: 0.575 

knn-clasified stress states
classification acc: 0.625 

encoding the all set
data_shape: (235, 2) 

(130, 2) (130, 1) (235, 2)
clustering acc: 0.6 

knn-clasified stress states
classification acc: 0.7319148936170212 


trained using binary_crossentropy
encoding the val set
data_shape: (65, 2) 

(130, 2) (130, 1) (65, 2)
clustering acc: 0.5692307692307692 

knn-clasified stress states
classification acc: 0.6923076923076923 

encoding the test set
data_shape: (40, 2) 

(130, 2) (130, 1) (40, 2)
clustering acc: 0.275 

knn-clasified stress states
classification acc: 0.675 

encoding the all set
data_shape: (235, 2) 

(130, 2) (130, 1) (235, 2)
clustering acc: 0.37446808510638296 

knn-clasified stress states
classification acc: 0.7489361702127659 


trained using mean_squared_error
encoding the val set
data_shape: (65, 2) 

(130, 2) (130, 1) (65, 2)
clustering acc: 0.4307692307692308 

knn-clasified stress states
classification acc: 0.6153846153846154 

encoding the test set
data_shape: (40, 2) 

(130, 2) (130, 1) (40, 2)
clustering acc: 0.325 

knn-clasified stress states
classification acc: 0.7 

encoding the all set
data_shape: (235, 2) 

(130, 2) (130, 1) (235, 2)
clustering acc: 0.6553191489361702 

knn-clasified stress states
classification acc: 0.723404255319149 

In [66]:
class_err/9,clust_err/9
Out[66]:
(0.6756864884524458, 0.4911302054919077)
In [67]:
batch_size = 20
signal_shape = (500, 1, 1)

latent_dim = 100
intermediate_dim = 100
epsilon_std = 1.0

x = Input(shape=signal_shape)

conv_1 = Conv2D(2,
                kernel_size=(40,1),activation='relu')(x)


flat = layers.Flatten()(conv_1)
hidden = Dense(intermediate_dim, activation='sigmoid')(flat)

z_mean = Dense(latent_dim)(hidden)
z_log_var = Dense(latent_dim)(hidden)

z = Lambda(sampling, output_shape=(latent_dim,))([z_mean, z_log_var])

decoder_hid = Dense(latent_dim, activation='relu')

intermed = Dense(intermediate_dim, activation='relu')
    
decoder_upsample = Dense(2 * 461 * 1, activation='relu')

output_shape = (batch_size, 461, 1, 2)
decoder_reshape = layers.Reshape(output_shape[1:])


decoder_mean = Conv2DTranspose(1,
                             kernel_size=(40, 1),
                             padding='valid',
                             activation='relu')

hid_decoded1 = decoder_hid(z)
hid_decoded = intermed(hid_decoded1)
up_decoded = decoder_upsample(hid_decoded)
reshape_decoded = decoder_reshape(up_decoded)
x_decoded_mean_squash=decoder_mean(reshape_decoded)

# instantiate VAE model
vae = Model(x, x_decoded_mean_squash)
vae.summary()
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_2 (InputLayer)            (None, 500, 1, 1)    0                                            
__________________________________________________________________________________________________
conv2d_2 (Conv2D)               (None, 461, 1, 2)    82          input_2[0][0]                    
__________________________________________________________________________________________________
flatten_6 (Flatten)             (None, 922)          0           conv2d_2[0][0]                   
__________________________________________________________________________________________________
dense_12 (Dense)                (None, 100)          92300       flatten_6[0][0]                  
__________________________________________________________________________________________________
dense_13 (Dense)                (None, 100)          10100       dense_12[0][0]                   
__________________________________________________________________________________________________
dense_14 (Dense)                (None, 100)          10100       dense_12[0][0]                   
__________________________________________________________________________________________________
lambda_2 (Lambda)               (None, 100)          0           dense_13[0][0]                   
                                                                 dense_14[0][0]                   
__________________________________________________________________________________________________
dense_15 (Dense)                (None, 100)          10100       lambda_2[0][0]                   
__________________________________________________________________________________________________
dense_16 (Dense)                (None, 100)          10100       dense_15[0][0]                   
__________________________________________________________________________________________________
dense_17 (Dense)                (None, 922)          93122       dense_16[0][0]                   
__________________________________________________________________________________________________
reshape_2 (Reshape)             (None, 461, 1, 2)    0           dense_17[0][0]                   
__________________________________________________________________________________________________
conv2d_transpose_2 (Conv2DTrans (None, 500, 1, 1)    81          reshape_2[0][0]                  
==================================================================================================
Total params: 225,985
Trainable params: 225,985
Non-trainable params: 0
__________________________________________________________________________________________________
In [68]:
trainX,trainY,valX,valY,testX,testY,X_all,Y_all=shuffle_data(X_all,Y_all)

clust_err = 0
class_err = 0
count3 = 0
for i,loss in enumerate([vae_loss,'binary_crossentropy','mean_squared_error']):    
    if i == 0:
        title = 'vae_loss'
        print('\ntrained using vae_loss\n')
    else:
        title = str(loss)
        print('\ntrained using',str(loss))
    vae = Model(x, x_decoded_mean_squash)
    vae.compile(optimizer='rmsprop',loss=loss)
    vae.fit(trainX, trainX, epochs=100, batch_size=20,verbose=False)
    encoder = Model(x, z_mean)
    train_encode = encoder.predict(trainX, batch_size=20)
    for train, label,div in [(valX,valY,'val'),(testX,testY,'test'),(X_all,Y_all,'all')]:
        print('encoding the',div,'set')
        label =label.reshape(label.shape[0])
        x_test_encoded = encoder.predict(train, batch_size=20)
        print('data_shape:',x_test_encoded.shape,'\n')

        kmeans = KMeans(n_clusters=2, random_state=0).fit(x_test_encoded)
        neigh = KNeighborsClassifier(n_neighbors=2)
        
        nlabel = kmeans.predict(x_test_encoded)
        
        print(train_encode.shape,trainY.shape,x_test_encoded.shape)
        neigh.fit(train_encode,trainY.reshape((trainY.shape[0],)))
        pred=neigh.predict(x_test_encoded)

        count = 0
        cnt2 = 0
        for i in range(label.shape[0]):
            p1=x_test_encoded[i][0]
            p2=x_test_encoded[i][1]
            if label[i] == nlabel[i]:
                count+=1
            if label[i] == pred[i]:
                cnt2+=1
            '''if label[i]:
                plt.scatter(p1,p2,c='r')
            else:
                plt.scatter(p1,p2,c='b')'''
        
        '''title2 = title + ' ' + div
        plt.title(title2)
        plt.xlabel('latent dimension 1')
        plt.ylabel('latent dimension 2')
        
        print('labeled stress states')
        plt.show(0)
    
        title2 = title + ' ' + div
        plt.title(title2)
        plt.scatter(x_test_encoded[:,0],x_test_encoded[:,1], c=nlabel, s=65, cmap='viridis')
        print('clustered stress states')
        plt.show(1)'''
        
        print('clustering acc:',count/label.shape[0],'\n')
        clust_err += count/label.shape[0]

        '''for i in range(pred.shape[0]):
            p1=x_test_encoded[i][0]
            p2=x_test_encoded[i][1]
            if pred[i] == 1:
                plt.scatter(p1,p2,c='m')
            else:
                plt.scatter(p1,p2,c='c')'''
                
        '''title2 = title + ' ' + div
        plt.title(title2)
        plt.xlabel('latent dimension 1')
        plt.ylabel('latent dimension 2')
        plt.show(2)'''
        print('knn-clasified stress states')
        print('classification acc:',cnt2/label.shape[0],'\n')
        class_err += cnt2/label.shape[0]
        count3 +=1
trained using vae_loss

encoding the val set
data_shape: (65, 100) 

(130, 100) (130, 1) (65, 100)
clustering acc: 0.4153846153846154 

knn-clasified stress states
classification acc: 0.6615384615384615 

encoding the test set
data_shape: (40, 100) 

(130, 100) (130, 1) (40, 100)
clustering acc: 0.725 

knn-clasified stress states
classification acc: 0.65 

encoding the all set
data_shape: (235, 100) 

(130, 100) (130, 1) (235, 100)
clustering acc: 0.5914893617021276 

knn-clasified stress states
classification acc: 0.6851063829787234 


trained using binary_crossentropy
encoding the val set
data_shape: (65, 100) 

(130, 100) (130, 1) (65, 100)
clustering acc: 0.5846153846153846 

knn-clasified stress states
classification acc: 0.6923076923076923 

encoding the test set
data_shape: (40, 100) 

(130, 100) (130, 1) (40, 100)
clustering acc: 0.65 

knn-clasified stress states
classification acc: 0.65 

encoding the all set
data_shape: (235, 100) 

(130, 100) (130, 1) (235, 100)
clustering acc: 0.39148936170212767 

knn-clasified stress states
classification acc: 0.7531914893617021 


trained using mean_squared_error
encoding the val set
data_shape: (65, 100) 

(130, 100) (130, 1) (65, 100)
clustering acc: 0.6615384615384615 

knn-clasified stress states
classification acc: 0.7076923076923077 

encoding the test set
data_shape: (40, 100) 

(130, 100) (130, 1) (40, 100)
clustering acc: 0.65 

knn-clasified stress states
classification acc: 0.7 

encoding the all set
data_shape: (235, 100) 

(130, 100) (130, 1) (235, 100)
clustering acc: 0.6170212765957447 

knn-clasified stress states
classification acc: 0.7617021276595745 

In [69]:
class_err/9,clust_err/9
Out[69]:
(0.6957264957264957, 0.5873931623931624)

Average Clustering Accuracy over 12 Gridsearches (4 data splits and 3 loss functions)

In [70]:
print(clust_err/count3)
0.5873931623931624

Average Classification Accuracy over 12 Gridsearches (4 data splits and 3 loss functions)

In [71]:
print(class_err/count3)
0.6957264957264957

Project onto 3D Latent Space

In [73]:
batch_size = 20
signal_shape = (500, 1, 1)
latent_dim = 3
intermediate_dim = 5
epsilon_std = 1.0

x = Input(shape=signal_shape)

conv_1 = Conv2D(2,
                kernel_size=(40,1),activation='relu')(x)


flat = layers.Flatten()(conv_1)
hidden = Dense(intermediate_dim, activation='sigmoid')(flat)

z_mean = Dense(latent_dim)(hidden)
z_log_var = Dense(latent_dim)(hidden)

# note that "output_shape" isn't necessary with the TensorFlow backend
# so you could write `Lambda(sampling)([z_mean, z_log_var])`
z = Lambda(sampling, output_shape=(latent_dim,))([z_mean, z_log_var])

# we instantiate these layers separately so as to reuse them later
decoder_hid = Dense(latent_dim, activation='relu')

intermed = Dense(intermediate_dim, activation='relu')
    
decoder_upsample = Dense(2 * 461 * 1, activation='relu')

output_shape = (batch_size, 461, 1, 2)
decoder_reshape = layers.Reshape(output_shape[1:])


decoder_mean = Conv2DTranspose(1,
                             kernel_size=(40, 1),
                             padding='valid',
                             activation='relu')


#decoder_mean_squash = layers.Reshape((500,1,1))

hid_decoded1 = decoder_hid(z)
hid_decoded = intermed(hid_decoded1)
up_decoded = decoder_upsample(hid_decoded)
reshape_decoded = decoder_reshape(up_decoded)
x_decoded_mean_squash=decoder_mean(reshape_decoded)

# instantiate VAE model
vae = Model(x, x_decoded_mean_squash)

vae = Model(x, x_decoded_mean_squash)
vae.compile(optimizer='rmsprop',loss=vae_loss)
vae.summary()
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_3 (InputLayer)            (None, 500, 1, 1)    0                                            
__________________________________________________________________________________________________
conv2d_3 (Conv2D)               (None, 461, 1, 2)    82          input_3[0][0]                    
__________________________________________________________________________________________________
flatten_7 (Flatten)             (None, 922)          0           conv2d_3[0][0]                   
__________________________________________________________________________________________________
dense_18 (Dense)                (None, 5)            4615        flatten_7[0][0]                  
__________________________________________________________________________________________________
dense_19 (Dense)                (None, 3)            18          dense_18[0][0]                   
__________________________________________________________________________________________________
dense_20 (Dense)                (None, 3)            18          dense_18[0][0]                   
__________________________________________________________________________________________________
lambda_3 (Lambda)               (None, 3)            0           dense_19[0][0]                   
                                                                 dense_20[0][0]                   
__________________________________________________________________________________________________
dense_21 (Dense)                (None, 3)            12          lambda_3[0][0]                   
__________________________________________________________________________________________________
dense_22 (Dense)                (None, 5)            20          dense_21[0][0]                   
__________________________________________________________________________________________________
dense_23 (Dense)                (None, 922)          5532        dense_22[0][0]                   
__________________________________________________________________________________________________
reshape_3 (Reshape)             (None, 461, 1, 2)    0           dense_23[0][0]                   
__________________________________________________________________________________________________
conv2d_transpose_3 (Conv2DTrans (None, 500, 1, 1)    81          reshape_3[0][0]                  
==================================================================================================
Total params: 10,378
Trainable params: 10,378
Non-trainable params: 0
__________________________________________________________________________________________________
In [74]:
X_all.shape,Y_all.shape,valX.shape,trainX.shape,testX.shape
Out[74]:
((235, 500, 1, 1),
 (235, 1),
 (65, 500, 1, 1),
 (130, 500, 1, 1),
 (40, 500, 1, 1))
In [75]:
trainX,trainY,valX,valY,testX,testY,X_all,Y_all=shuffle_data(X_all,Y_all)
count3 = 0
clust_err=0
class_err=0
for i,loss in enumerate([vae_loss,'binary_crossentropy','mean_squared_error']):
    vae = Model(x, x_decoded_mean_squash)
    vae.compile(optimizer='rmsprop',loss=loss)
    vae.fit(trainX, trainX, epochs=5, batch_size=20)
    encoder = Model(x, z_mean)
    train_encode = encoder.predict(trainX, batch_size=20)
    
    if i == 0:
        title = 'vae_loss'
        print('\nvae_loss\n')
    else:
        title = str(loss)
        print('\n',str(loss))
    
    label =label.reshape(label.shape[0])
    x_test_encoded = encoder.predict(train, batch_size=20)
    for train, label,div in [(valX,valY,'val'),(testX,testY,'test'),(X_all,Y_all,'all')]:
        #Y_all =label.reshape(label.shape[0])
        stresses=[]
        relaxes=[]
        for i in range(len(list(label))):
            if label[i] == 1:
                stresses.append(x_test_encoded[i])
            else:
                relaxes.append(x_test_encoded[i])
        relaxes=np.array(relaxes)
        stresses=np.array(stresses)  
        
        print(div,'set')
        
        print('data_shape:',x_test_encoded.shape,'\n')

        kmeans = KMeans(n_clusters=2, random_state=0).fit(x_test_encoded)
        nlabel = kmeans.predict(x_test_encoded)
        
        neigh = KNeighborsClassifier(n_neighbors=2)
        neigh.fit(train_encode,trainY.reshape((trainY.shape[0],)))
        pred=neigh.predict(x_test_encoded)
        print(pred.shape,nlabel.shape,label.shape)
        
        count = 0
        cnt2 = 0
        for i in range(label.shape[0]):
            if label[i] == nlabel[i]:
                count+=1
            if label[i] == pred[i]:
                cnt2+=1

        a1=np.squeeze(np.asarray(stresses[:,0]))
        a2=np.squeeze(np.asarray(stresses[:,1]))
        a3=np.squeeze(np.asarray(stresses[:,2]))
        b1=np.squeeze(np.asarray(relaxes[:,0]))
        b2=np.squeeze(np.asarray(relaxes[:,1]))
        b3=np.squeeze(np.asarray(relaxes[:,2]))

        fig = plt.figure()
        ax = fig.add_subplot(111, projection='3d')
        ax.scatter(xs=np.real(a3), ys=np.real(a2), 
        zs=np.real(a1), zdir='z', s=20, c='r', depthshade=True)
        ax.scatter(xs=np.real(b3), ys=np.real(b2), 
        zs=np.real(b1), zdir='z', s=20, c='b', depthshade=True)
        r_patch = mpatches.Patch(color='red', label='stressed')
        b_patch = mpatches.Patch(color='blue', label='relaxed')
        plt.legend(handles=[r_patch,b_patch])
        fig.suptitle('labeled stress states', fontsize=20)

        plt.show()
        
        stresses2=[]
        relaxes2=[]
        for i in range(len(list(nlabel))):
            if nlabel[i] == 1:
                stresses2.append(x_test_encoded[i])
            else:
                relaxes2.append(x_test_encoded[i])
                
        relaxes2=np.array(relaxes2)
        stresses2=np.array(stresses2) 
        a1=np.squeeze(np.asarray(stresses2[:,0]))
        a2=np.squeeze(np.asarray(stresses2[:,1]))
        a3=np.squeeze(np.asarray(stresses2[:,2]))
        b1=np.squeeze(np.asarray(relaxes2[:,0]))
        b2=np.squeeze(np.asarray(relaxes2[:,1]))
        b3=np.squeeze(np.asarray(relaxes2[:,2]))

        fig = plt.figure()
        ax = fig.add_subplot(111, projection='3d')
        ax.scatter(xs=np.real(a3), ys=np.real(a2), 
        zs=np.real(a1), zdir='z', s=20, c='y', depthshade=True)
        ax.scatter(xs=np.real(b3), ys=np.real(b2), 
        zs=np.real(b1), zdir='z', s=20, c='k', depthshade=True)
        r_patch = mpatches.Patch(color='y', label='stressed')
        b_patch = mpatches.Patch(color='k', label='relaxed')
        plt.legend(handles=[r_patch,b_patch])
        fig.suptitle('clustered stress states', fontsize=20)
        plt.show()
        
        print('clustering acc:',count/label.shape[0],'\n')
        clust_err+=count/label.shape[0]
        
        
        stresses3=[]
        relaxes3=[]
        for i in range(len(list(nlabel))):
            if pred[i] == 1:
                stresses3.append(x_test_encoded[i])
            else:
                relaxes3.append(x_test_encoded[i])
        relaxes3=np.array(relaxes3)
        stresses3=np.array(stresses3) 
        a1=np.squeeze(np.asarray(stresses3[:,0]))
        a2=np.squeeze(np.asarray(stresses3[:,1]))
        a3=np.squeeze(np.asarray(stresses3[:,2]))
        b1=np.squeeze(np.asarray(relaxes3[:,0]))
        b2=np.squeeze(np.asarray(relaxes3[:,1]))
        b3=np.squeeze(np.asarray(relaxes3[:,2]))

        fig = plt.figure()
        ax = fig.add_subplot(111, projection='3d')
        ax.scatter(xs=np.real(a3), ys=np.real(a2), 
        zs=np.real(a1), zdir='z', s=20, c='m', depthshade=True)
        ax.scatter(xs=np.real(b3), ys=np.real(b2), 
        zs=np.real(b1), zdir='z', s=20, c='c', depthshade=True)
        r_patch = mpatches.Patch(color='m', label='stressed')
        b_patch = mpatches.Patch(color='c', label='relaxed')
        plt.legend(handles=[r_patch,b_patch])
        fig.suptitle('knn-classified stress states', fontsize=20)
        
        plt.show()
        print('classification acc:',cnt2/label.shape[0],'\n')
        class_err+=cnt2/label.shape[0]
        count3 +=1
Epoch 1/5
130/130 [==============================] - 2s 12ms/step - loss: 37.2646
Epoch 2/5
130/130 [==============================] - 0s 1ms/step - loss: 21.8679
Epoch 3/5
130/130 [==============================] - 0s 1ms/step - loss: 21.5340
Epoch 4/5
130/130 [==============================] - 0s 2ms/step - loss: 20.5627
Epoch 5/5
130/130 [==============================] - 0s 2ms/step - loss: 20.2841

vae_loss

val set
data_shape: (235, 3) 

(235,) (235,) (65, 1)
clustering acc: 0.5384615384615384 

classification acc: 0.5538461538461539 

test set
data_shape: (235, 3) 

(235,) (235,) (40, 1)
clustering acc: 0.625 

classification acc: 0.625 

all set
data_shape: (235, 3) 

(235,) (235,) (235, 1)
clustering acc: 0.5404255319148936 

classification acc: 0.5106382978723404 

Epoch 1/5
130/130 [==============================] - 1s 10ms/step - loss: 6.2394e-04
Epoch 2/5
130/130 [==============================] - 0s 1ms/step - loss: -0.0019
Epoch 3/5
130/130 [==============================] - 0s 1ms/step - loss: -0.0037
Epoch 4/5
130/130 [==============================] - 0s 1ms/step - loss: -0.0013
Epoch 5/5
130/130 [==============================] - 0s 1ms/step - loss: -0.0037

 binary_crossentropy
val set
data_shape: (235, 3) 

(235,) (235,) (65, 1)
clustering acc: 0.49230769230769234 

classification acc: 0.46153846153846156 

test set
data_shape: (235, 3) 

(235,) (235,) (40, 1)
clustering acc: 0.525 

classification acc: 0.65 

all set
data_shape: (235, 3) 

(235,) (235,) (235, 1)
clustering acc: 0.425531914893617 

classification acc: 0.7531914893617021 

Epoch 1/5
130/130 [==============================] - 1s 9ms/step - loss: 0.0137
Epoch 2/5
130/130 [==============================] - 0s 1ms/step - loss: 0.0136
Epoch 3/5
130/130 [==============================] - 0s 1ms/step - loss: 0.0136
Epoch 4/5
130/130 [==============================] - 0s 1ms/step - loss: 0.0135
Epoch 5/5
130/130 [==============================] - 0s 1ms/step - loss: 0.0135

 mean_squared_error
val set
data_shape: (235, 3) 

(235,) (235,) (65, 1)
clustering acc: 0.5692307692307692 

classification acc: 0.49230769230769234 

test set
data_shape: (235, 3) 

(235,) (235,) (40, 1)
clustering acc: 0.5 

classification acc: 0.6 

all set
data_shape: (235, 3) 

(235,) (235,) (235, 1)
clustering acc: 0.6085106382978723 

classification acc: 0.7361702127659574 

Average Clustering Accuracy over 12 Gridsearches (4 data splits and 3 loss functions)

In [76]:
print(clust_err/count3)
0.5360520094562647

Average Classification Accuracy over 12 Gridsearches (4 data splits and 3 loss functions)

In [77]:
print(class_err/count3)
0.5980769230769231
In [78]:
count3
Out[78]:
9

Interpretation of Latent Space projections

Observing the decision accuracies above in 2D and 3D latent spaces, our unsupervised kMeans method has inconsistent performance. We note that our points in latent space are too close to one another to be separated by a Euclidian distance-based clustering algorithm like kMeans. Moreover, the kMeans algorithm itself is performing local, greedy choices at each step without supervision, and is more likely to get stuck in local minima than a supervised classification algorithm with a more global view on the space of the decision problem. Perhaps hyperplanes in higher-dimensions might separate the data, making a technique like kMeans more helpful in this case.

However, our focus is to reduce the dimensionality of our dataset, and demonstrate that we can draw accurate decision boundaries in this reduced space. Why?

This helps us in two directions. 1) We are able to fight against the vanishing graident problem as our data's dimensionality is reduced, which might allow more compact NN models with fewer hidden layers to be trained on it. In essence, this procedure might project our data onto parameter spaces more suitable for the backpropogation algorithm. 2) Following on 1, models built to classify the data in latent space should take less computational resources. (We recall that training a MLP model is a more computationally exhaustive procedure than performing inference or predictions). This would enable the implementation of smaller NN models that can be trained remotely (likely via cloud), and run in realtime for online, anamoly detection purposes. In other words, - once its parameters are trained - the model would be able to classify the reduced-dimensionality data on a mobile, IoT device for stress (anamoly) detection purposes (for example, this is a similar type of framework - running image classifcation models trained in the cloud on an IoT device: https://aws.amazon.com/deeplens/).

Observing our classification accuracies with the kNN algo, we perform almost as well as our CNN in 500 dimensions, averaging well over 70% in our above outputs. We noted that using the sigmoid activation in the dense layer from the intermediate to latent dimension allowed our kNN classification model to separate more cleanly in the latent space.

Having achieved different accuracies on the classificaiton in data's original and latent space, we briefly explore our data through the lens of, not classification, but time-series. We look at predicting and reconstructing our EDA signals through ARIMA models, but this traditional method gets exaustive as a new model must be trained for each set of features (each 500-dim signal). We then explore the possibilities of reconstructing the signal using autencoder frameworks. Our intent in reconstructing our original signals for stressed and relaxed states is the goal of maintaining semantic value of the representations. After establishing a baseline reconstruction, we would be able to search for the maximum-sized latent space representation necessary to effectively reconstruct the signal and maintain semantic value. Our motivation behind this is enabling the accurate prediction of stressed/relaxed signals from a low-dimensional latent space. Combining successful signal reconstruction/classification from a low-dimensional latent space are steps in the direction of online, stress/stress anamoly detection.

Reconstructing Signal via ARIMA modeling

In [101]:
signal = X_all[np.where(Y_all == 0)[0]][0].reshape((500,))

time = 500 / 4
t3 = np.linspace(0, time, 500)

fig = plt.figure(1,figsize=(20, 5))
if 1:
    plt.plot(t3, signal,color = "b")
else:
    plt.plot(t3, signal,color = "b")
    
plt.show()

autocorrelation_plot(signal)
#plt.show()

# fit model
model = ARIMA(signal, order=(5,1,0))
model_fit = model.fit(disp=0)
model_fit.summary()
#print(model_fit.summary())

residuals = DataFrame(model_fit.resid)
residuals.plot()
plt.show()
residuals.plot(kind='kde')
plt.show()
residuals.describe()
#print(residuals.describe())

X = signal
size = int(len(X) * 0.66)
train, test = X[0:size], X[size:len(X)]
history = [x for x in train]
predictions = list()

for t in range(len(test)):
    model = ARIMA(history, order=(5,1,0))
    model_fit = model.fit(disp=0)
    output = model_fit.forecast()
    yhat = output[0]
    predictions.append(yhat)
    obs = test[t]
    history.append(obs)

error = mean_squared_error(test, predictions)
print('Test MSE: %.3f' % error)

plt.plot(test)
plt.plot(predictions, color='red')
plt.show()
/usr/local/lib/python3.6/site-packages/statsmodels/tsa/kalmanf/kalmanfilter.py:646: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  if issubdtype(paramsdtype, float):
/usr/local/lib/python3.6/site-packages/statsmodels/tsa/kalmanf/kalmanfilter.py:650: FutureWarning: Conversion of the second argument of issubdtype from `complex` to `np.complexfloating` is deprecated. In future, it will be treated as `np.complex128 == np.dtype(complex).type`.
  elif issubdtype(paramsdtype, complex):
/usr/local/lib/python3.6/site-packages/statsmodels/tsa/kalmanf/kalmanfilter.py:577: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  if issubdtype(paramsdtype, float):
/usr/local/lib/python3.6/site-packages/statsmodels/base/model.py:496: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals
  "Check mle_retvals", ConvergenceWarning)
/usr/local/lib/python3.6/site-packages/statsmodels/base/model.py:496: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals
  "Check mle_retvals", ConvergenceWarning)
/usr/local/lib/python3.6/site-packages/statsmodels/base/model.py:496: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals
  "Check mle_retvals", ConvergenceWarning)
/usr/local/lib/python3.6/site-packages/statsmodels/base/model.py:496: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals
  "Check mle_retvals", ConvergenceWarning)
Test MSE: 0.000
In [102]:
signal2 = X_all[np.where(Y_all == 1)[0]][0].reshape((500,))

time = 500 / 4
t3 = np.linspace(0, time, 500)

fig = plt.figure(1,figsize=(20, 5))
if 1:
    plt.plot(t3, signal2,color = "b")
else:
    plt.plot(t3, signal2,color = "b")
    
plt.show()

autocorrelation_plot(signal2)

model = ARIMA(signal2, order=(5,1,0))
model_fit = model.fit(disp=0)
#print(model_fit.summary())
model_fit.summary()

residuals = DataFrame(model_fit.resid)
residuals.plot()
plt.show()
residuals.plot(kind='kde')
plt.show()
#print(residuals.describe())
residuals.describe()


X = signal2
size = int(len(X) * 0.66)
train, test = X[0:size], X[size:len(X)]
history = [x for x in train]
predictions = list()

for t in range(len(test)):
    model = ARIMA(history, order=(5,1,0))
    model_fit = model.fit(disp=0)
    output = model_fit.forecast()
    yhat = output[0]
    predictions.append(yhat)
    obs = test[t]
    history.append(obs)

error = mean_squared_error(test, predictions)
print('Test MSE: %.3f' % error)

plt.plot(test)
plt.plot(predictions, color='red')
plt.show()
/usr/local/lib/python3.6/site-packages/statsmodels/tsa/kalmanf/kalmanfilter.py:646: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  if issubdtype(paramsdtype, float):
/usr/local/lib/python3.6/site-packages/statsmodels/tsa/kalmanf/kalmanfilter.py:650: FutureWarning: Conversion of the second argument of issubdtype from `complex` to `np.complexfloating` is deprecated. In future, it will be treated as `np.complex128 == np.dtype(complex).type`.
  elif issubdtype(paramsdtype, complex):
/usr/local/lib/python3.6/site-packages/statsmodels/tsa/kalmanf/kalmanfilter.py:577: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  if issubdtype(paramsdtype, float):
/usr/local/lib/python3.6/site-packages/statsmodels/base/model.py:496: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals
  "Check mle_retvals", ConvergenceWarning)
/usr/local/lib/python3.6/site-packages/statsmodels/base/model.py:496: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals
  "Check mle_retvals", ConvergenceWarning)
/usr/local/lib/python3.6/site-packages/statsmodels/base/model.py:496: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals
  "Check mle_retvals", ConvergenceWarning)
/usr/local/lib/python3.6/site-packages/statsmodels/base/model.py:496: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals
  "Check mle_retvals", ConvergenceWarning)
/usr/local/lib/python3.6/site-packages/statsmodels/base/model.py:496: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals
  "Check mle_retvals", ConvergenceWarning)
/usr/local/lib/python3.6/site-packages/statsmodels/base/model.py:496: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals
  "Check mle_retvals", ConvergenceWarning)
/usr/local/lib/python3.6/site-packages/statsmodels/base/model.py:496: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals
  "Check mle_retvals", ConvergenceWarning)
/usr/local/lib/python3.6/site-packages/statsmodels/base/model.py:496: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals
  "Check mle_retvals", ConvergenceWarning)
/usr/local/lib/python3.6/site-packages/statsmodels/base/model.py:496: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals
  "Check mle_retvals", ConvergenceWarning)
/usr/local/lib/python3.6/site-packages/statsmodels/base/model.py:496: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals
  "Check mle_retvals", ConvergenceWarning)
/usr/local/lib/python3.6/site-packages/statsmodels/base/model.py:496: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals
  "Check mle_retvals", ConvergenceWarning)
/usr/local/lib/python3.6/site-packages/statsmodels/base/model.py:496: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals
  "Check mle_retvals", ConvergenceWarning)
/usr/local/lib/python3.6/site-packages/statsmodels/base/model.py:496: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals
  "Check mle_retvals", ConvergenceWarning)
/usr/local/lib/python3.6/site-packages/statsmodels/base/model.py:496: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals
  "Check mle_retvals", ConvergenceWarning)
/usr/local/lib/python3.6/site-packages/statsmodels/base/model.py:496: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals
  "Check mle_retvals", ConvergenceWarning)
/usr/local/lib/python3.6/site-packages/statsmodels/base/model.py:496: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals
  "Check mle_retvals", ConvergenceWarning)
/usr/local/lib/python3.6/site-packages/statsmodels/base/model.py:496: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals
  "Check mle_retvals", ConvergenceWarning)
/usr/local/lib/python3.6/site-packages/statsmodels/base/model.py:496: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals
  "Check mle_retvals", ConvergenceWarning)
/usr/local/lib/python3.6/site-packages/statsmodels/base/model.py:496: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals
  "Check mle_retvals", ConvergenceWarning)
/usr/local/lib/python3.6/site-packages/statsmodels/base/model.py:496: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals
  "Check mle_retvals", ConvergenceWarning)
/usr/local/lib/python3.6/site-packages/statsmodels/base/model.py:496: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals
  "Check mle_retvals", ConvergenceWarning)
Test MSE: 0.000

Note the limitations of ARIMA modeling as a new model must be trained for each set of features (e.g. each 500-dim signal). While data could be generated nicely with ARIMA models as shown above by the filtered signal, it may be more computationally resourceful to attempt to generate/reconstruct/predict on batches of relaxed and stressed EDA signals at a time with NN/MLP architectures.

source - https://www.itl.nist.gov/div898/handbook/eda/section3/autocop3.htm

Focus on reconstructing with CNN architectures....ARIMA models too constrained

In [110]:
for i in range(X_all.shape[0]):
    time = 500 / 4
    t3 = np.linspace(0, time, 500)

    fig = plt.figure(1,figsize=(20, 5))
    if Y_all[i] == 1:
        plt.plot(t3, X_all[i].reshape((500,)),color = "r")
    else:
        plt.plot(t3, X_all[i].reshape((500,)),color = "b")

plt.title('all original examples')
plt.ylim((-2, 2))
plt.show()

Note from the original 500-d examples that, for the stressed examples, the variance of the amplitudes appears much greater in stressed signals.

Reconstruct using VAE with CNN architecture

In [111]:
batch_size = 20
signal_shape = (500, 1, 1)
latent_dim = 50    
intermediate_dim = 200
epsilon_std = 1.0
epochs = 10

x = Input(shape=signal_shape)
conv_1 = Conv2D(2,kernel_size=(40,1),activation='relu')(x)

flat = layers.Flatten()(conv_1)
hidden = Dense(intermediate_dim, activation='relu')(flat)

z_mean = Dense(latent_dim)(hidden)
z_log_var = Dense(latent_dim)(hidden)

# note that "output_shape" isn't necessary with the TensorFlow backend
# so you could write `Lambda(sampling)([z_mean, z_log_var])`
z = Lambda(sampling, output_shape=(latent_dim,))([z_mean, z_log_var])

# we instantiate these layers separately so as to reuse them later
decoder_hid = Dense(latent_dim, activation='relu')
intermed = Dense(intermediate_dim, activation='relu')
decoder_upsample = Dense(2 * 461 * 1, activation='relu')
output_shape = (batch_size, 461, 1, 2)
decoder_reshape = layers.Reshape(output_shape[1:])

decoder_mean = Conv2DTranspose(1,
                                 kernel_size=(40, 1),
                                 padding='valid',
                                 activation='relu')

hid_decoded1 = decoder_hid(z)
hid_decoded = intermed(hid_decoded1)
up_decoded = decoder_upsample(hid_decoded)
reshape_decoded = decoder_reshape(up_decoded)
x_decoded_mean_squash=decoder_mean(reshape_decoded)

# instantiate VAE model
vae = Model(x, x_decoded_mean_squash)

vae = Model(x, x_decoded_mean_squash)
vae.compile(optimizer='rmsprop',loss='binary_crossentropy')
vae.summary()
vae.fit(trainX, trainX, epochs=20, batch_size=20)

decoder_input = Input(shape=(latent_dim,))
hid_decoded1 = decoder_hid(decoder_input)
hid_decoded = intermed(hid_decoded1)
up_decoded = decoder_upsample(hid_decoded)
reshape_decoded = decoder_reshape(up_decoded)
x_decoded_mean_squash=decoder_mean(reshape_decoded)
generator = Model(decoder_input, x_decoded_mean_squash)
encoder = Model(x, z_mean)
x_test_encoded = encoder.predict(X_all.reshape(235,500,1,1), batch_size=batch_size)    
x_decoded = generator.predict(x_test_encoded, batch_size=20)
x_decoded-=x_decoded.mean(0)

for i in range(5):
    time = 500 / 4
    t3 = np.linspace(0, time, 500)

    fig = plt.figure(1,figsize=(20, 5))
    if Y_all[i] == 1:
        plt.plot(t3, x_decoded[i].reshape((500,)),color = "r")
    else:
        plt.plot(t3, x_decoded[i].reshape((500,)),color = "b")

plt.title('(first 5) CNN VAE Reconstruction')   
plt.ylim((-2, 2))
plt.show()

for i in range(X_all.shape[0]):
    time = 500 / 4
    t3 = np.linspace(0, time, 500)

    fig = plt.figure(1,figsize=(20, 5))
    if Y_all[i] == 1:
        plt.plot(t3, x_decoded[i].reshape((500,)),color = "r")
    else:
        plt.plot(t3, x_decoded[i].reshape((500,)),color = "b")

plt.title('CNN VAE Reconstruction') 
plt.ylim((-2, 2))
plt.show()
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_9 (InputLayer)            (None, 500, 1, 1)    0                                            
__________________________________________________________________________________________________
conv2d_5 (Conv2D)               (None, 461, 1, 2)    82          input_9[0][0]                    
__________________________________________________________________________________________________
flatten_9 (Flatten)             (None, 922)          0           conv2d_5[0][0]                   
__________________________________________________________________________________________________
dense_37 (Dense)                (None, 200)          184600      flatten_9[0][0]                  
__________________________________________________________________________________________________
dense_38 (Dense)                (None, 50)           10050       dense_37[0][0]                   
__________________________________________________________________________________________________
dense_39 (Dense)                (None, 50)           10050       dense_37[0][0]                   
__________________________________________________________________________________________________
lambda_5 (Lambda)               (None, 50)           0           dense_38[0][0]                   
                                                                 dense_39[0][0]                   
__________________________________________________________________________________________________
dense_40 (Dense)                (None, 50)           2550        lambda_5[0][0]                   
__________________________________________________________________________________________________
dense_41 (Dense)                (None, 200)          10200       dense_40[0][0]                   
__________________________________________________________________________________________________
dense_42 (Dense)                (None, 922)          185322      dense_41[0][0]                   
__________________________________________________________________________________________________
reshape_5 (Reshape)             (None, 461, 1, 2)    0           dense_42[0][0]                   
__________________________________________________________________________________________________
conv2d_transpose_5 (Conv2DTrans (None, 500, 1, 1)    81          reshape_5[0][0]                  
==================================================================================================
Total params: 402,935
Trainable params: 402,935
Non-trainable params: 0
__________________________________________________________________________________________________

Note that while the reconstructions don't look natural, the variance of the amplitudes, for the stressed examples, appears greater still. Perhaps we are retaining semantic infomation in these transformed representations. In fact, the reconstructions appear to be denser than the original signal - perhaps we are filtering the signal into a more compact representation. Or the latent distribution we are reconstructing from is contained in a more compact hypervolume leading to more compact (in terms of amplitudes) signal reconstructions.

Reconstruct using VAE with standard MLP architecture

In [112]:
# this is the size of our encoded representations
encoding_dim = 300  # 32 floats -> compression of factor 24.5, assuming the input is 784 floats

# this is our input placeholder
input_img = Input(shape=(500,))
# "encoded" is the encoded representation of the input
encoded = Dense(encoding_dim, activation='relu')(input_img)
# "decoded" is the lossy reconstruction of the input
decoded = Dense(500, activation='sigmoid')(encoded)

# this model maps an input to its reconstruction
autoencoder = Model(input_img, decoded)

# this model maps an input to its encoded representation
encoder = Model(input_img, encoded)

# create a placeholder for an encoded (32-dimensional) input
encoded_input = Input(shape=(encoding_dim,))
# retrieve the last layer of the autoencoder model
decoder_layer = autoencoder.layers[-1]
# create the decoder model
decoder = Model(encoded_input, decoder_layer(encoded_input))

autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
autoencoder.summary()

autoencoder.fit(trainX.reshape((130,500)), trainX.reshape((130,500)),
                epochs=10,
                batch_size=50,
                shuffle=True,
                validation_data=(valX.reshape((valX.shape[0],500)), valX.reshape((valX.shape[0],500))))

# encode and decode some digits
# note that we take them from the *test* set
encoded_imgs = encoder.predict(X_all.reshape((X_all.shape[0],500)))
decoded_imgs = decoder.predict(encoded_imgs)

for i in range(decoded_imgs.shape[0]):
    time = 500 / 4
    t3 = np.linspace(0, time, 500)

    fig = plt.figure(1,figsize=(20, 5))
    if Y_all[i] == 1:
        plt.plot(t3, decoded_imgs[i].reshape((500,)),color = "r")
    else:
        plt.plot(t3, decoded_imgs[i].reshape((500,)),color = "b")

plt.title('Single-layered Dense AE Reconstruction') 
plt.ylim((-2, 2))
plt.show()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_10 (InputLayer)        (None, 500)               0         
_________________________________________________________________
dense_43 (Dense)             (None, 300)               150300    
_________________________________________________________________
dense_44 (Dense)             (None, 500)               150500    
=================================================================
Total params: 300,800
Trainable params: 300,800
Non-trainable params: 0
_________________________________________________________________

Again, note that while the reconstructions don't look natural, the variance of the amplitudes, for the stressed examples, appears greater. Also, we have the consistency of a more compact representation in the reconstructions - as described before.

Add more dense layers

In [114]:
# this is the size of our encoded representations
encoding_dim = 100  # 32 floats -> compression of factor 24.5, assuming the input is 784 floats

# this is our input placeholder
input_img = Input(shape=(500,))
# "encoded" is the encoded representation of the input
encoded = Dense(200, activation='relu')(input_img)
encoded1 = Dense(100, activation='relu')(encoded)
encoded2 = Dense(300, activation='relu')(encoded1)
# "decoded" is the lossy reconstruction of the input
decoded = Dense(500, activation='relu')(encoded2)

# this model maps an input to its reconstruction
autoencoder = Model(input_img, decoded)

encoder = Model(input_img, encoded)

autoencoder.compile(optimizer='adadelta', loss='mean_squared_error')
autoencoder.summary()

autoencoder.fit(trainX.reshape((130,500)), trainX.reshape((130,500)), epochs=10, batch_size=235, verbose=2, shuffle=True)
yhat2 = autoencoder.predict(X_all.reshape((235,500)), batch_size=235, verbose=0)
yhat2=yhat2-yhat2.mean(0)

for i in range(50):
    time = 500 / 4
    t3 = np.linspace(0, time, 500)

    fig = plt.figure(1,figsize=(20, 5))
    if Y_all[i] == 1:
        plt.plot(t3, X_all[i].reshape((500,)),color = "r")
    else:
        plt.plot(t3, X_all[i].reshape((500,)),color = "b")
        
plt.title('(first 50) Original')     
plt.ylim((-2, 2))
plt.show()

for i in range(50):#decoded_imgs.shape[0]):
    time = 500 / 4
    t3 = np.linspace(0, time, 500)

    fig = plt.figure(1,figsize=(20, 5))
    if Y_all[i] == 1:
        plt.plot(t3, yhat2[i].reshape((500,)),color = "r")
    else:
        plt.plot(t3, yhat2[i].reshape((500,)),color = "b")

plt.title('Multi-layered Dense AE') 
plt.ylim((-2, 2))
plt.show()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_12 (InputLayer)        (None, 500)               0         
_________________________________________________________________
dense_45 (Dense)             (None, 200)               100200    
_________________________________________________________________
dense_46 (Dense)             (None, 100)               20100     
_________________________________________________________________
dense_47 (Dense)             (None, 300)               30300     
_________________________________________________________________
dense_48 (Dense)             (None, 500)               150500    
=================================================================
Total params: 301,100
Trainable params: 301,100
Non-trainable params: 0
_________________________________________________________________

Note, again, that while the reconstructions don't look natural, the variance of the amplitudes, for the stressed examples, appears greater. Again, consistency in the compact reconstruction!

Reconstruct using LSTM architecture

The key observation in all the earlier reconstructions is that the the stress examples are all more varied than relaxed ones. Does this mean we are retaining semantic meaning, even if quantitatively the representation of the data changes?

For the purposes of more effective data reconstruction/augmentation/exploration, let us instead filter our data through lstm layers, maintaining dimensionality at each layer.

In [115]:
X_all=X_all.reshape((235, 500,1))
model = Sequential()
model.add(LSTM(1, batch_input_shape=(235,500,1), return_sequences=True, stateful=True))
model.add(TimeDistributed(Dense(1, activation='relu')))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['acc'])
model.summary()
model.fit(X_all, X_all, epochs=20, batch_size=235, verbose=2, shuffle=True)
yhat = model.predict(X_all.reshape((235,500,1)), batch_size=235, verbose=0)
yhat=yhat-yhat.mean(0)

for i in range(10):
    time = 500 / 4
    t3 = np.linspace(0, time, 500)

    fig = plt.figure(1,figsize=(20, 5))
    if Y_all[i] == 1:
        plt.plot(t3, X_all[i].reshape((500,)),color = "r")
    else:
        plt.plot(t3, X_all[i].reshape((500,)),color = "b")

plt.title('(First 10) Original Signal')
plt.ylim((-2, 2))
plt.show()

for i in range(10):#decoded_imgs.shape[0]):
    time = 500 / 4
    t3 = np.linspace(0, time, 500)

    fig = plt.figure(1,figsize=(20, 5))
    if Y_all[i] == 1:
        plt.plot(t3, yhat[i].reshape((500,)),color = "r")
    else:
        plt.plot(t3, yhat[i].reshape((500,)),color = "b")

plt.title('(First 10) Reconstruction with LSTM network')
plt.ylim((-2, 2))
plt.show()

for i in range(235):
    time = 500 / 4
    t3 = np.linspace(0, time, 500)

    fig = plt.figure(1,figsize=(20, 5))
    if Y_all[i] == 1:
        plt.plot(t3, X_all[i].reshape((500,)),color = "r")
    else:
        plt.plot(t3, X_all[i].reshape((500,)),color = "b")

plt.title('Original Signal')
plt.ylim((-2, 2))
plt.show()

for i in range(235):#decoded_imgs.shape[0]):
    time = 500 / 4
    t3 = np.linspace(0, time, 500)

    fig = plt.figure(1,figsize=(20, 5))
    if Y_all[i] == 1:
        plt.plot(t3, yhat[i].reshape((500,)),color = "r")
    else:
        plt.plot(t3, yhat[i].reshape((500,)),color = "b")

plt.title('Reconstruction with LSTM network')
plt.ylim((-2, 2))
plt.show()

The general patterns reconstructed above demonstrate more pronounced spiking in the stress data examples. While these represenations still don't match the original data closely, the difference in semantic information betweeen stressed and relaxed states is definitely pronounced. Moreover, while each individual reconstruction was not as good as when we filtered our inividual signal through the ARIMA model, we are able to reconstruct a much larger batch of signals more computationally efficiently with the LSTM Autencoder architecture. Moreover, notice the consistency in more compact reconstructions using multi-layer perceptron models. This might be a feature of this algorithms/dynamical systems as universal function approximaters - circularly, these MLP models can be thought of as analytical functions.

Note, we may discover that reconstruction is unnecessary for a potential real-time application. Perhaps, we just want to project the data on the latent space, and infer a class from their in realtime. Perhaps, we don't care about reconstruction the signal from a latent space. That said, the consistency of compact signal reconstructions using these models is probably a good sign for efficient IoT computation using wearables tracking EDA.

Still, how would our work in reconstruction the signal with autencoders help us? We could shift gears, and focus on using this technique to augment our limited dataset. Let's try this by combining our original and reconstructed signals into one set, and projecting onto the latent space in order to perform classification on our augmented data.

In [116]:
X_all=X_all.reshape((235, 500, 1))
X_all2 = np.vstack([X_all,yhat])#.reshape((470, 500, 1,1))
Y_all2 = np.vstack([Y_all,Y_all])
X_all2.shape,Y_all2.shape
Out[116]:
((470, 500, 1), (470, 1))
In [117]:
trainX,trainY,valX,valY,testX,testY,X_all,Y_all=shuffle_data(X_all2,Y_all2)
In [118]:
batch_size = 20
signal_shape = (500, 1, 1)

latent_dim = 3
intermediate_dim = 5
epsilon_std = 1.0

x = Input(shape=signal_shape)

conv_1 = Conv2D(2,
                kernel_size=(40,1),activation='relu')(x)


flat = layers.Flatten()(conv_1)
hidden = Dense(intermediate_dim, activation='sigmoid')(flat)

z_mean = Dense(latent_dim)(hidden)
z_log_var = Dense(latent_dim)(hidden)

# note that "output_shape" isn't necessary with the TensorFlow backend
# so you could write `Lambda(sampling)([z_mean, z_log_var])`
z = Lambda(sampling, output_shape=(latent_dim,))([z_mean, z_log_var])

# we instantiate these layers separately so as to reuse them later
decoder_hid = Dense(latent_dim, activation='relu')

intermed = Dense(intermediate_dim, activation='relu')
    
decoder_upsample = Dense(2 * 461 * 1, activation='relu')

output_shape = (batch_size, 461, 1, 2)
decoder_reshape = layers.Reshape(output_shape[1:])


decoder_mean = Conv2DTranspose(1,
                             kernel_size=(40, 1),
                             padding='valid',
                             activation='relu')


#decoder_mean_squash = layers.Reshape((500,1,1))

hid_decoded1 = decoder_hid(z)
hid_decoded = intermed(hid_decoded1)
up_decoded = decoder_upsample(hid_decoded)
reshape_decoded = decoder_reshape(up_decoded)
x_decoded_mean_squash=decoder_mean(reshape_decoded)

# instantiate VAE model
vae = Model(x, x_decoded_mean_squash)

idx = np.random.permutation(X_all.shape[0])
X_all,Y_all = X_all[idx], Y_all[idx]

split1=0.5
s2=0.25
trainX,trainY = (X_all[:int(260*split1)],Y_all[:int(260*split1)])
valX,valY = (X_all[int(260*split1):int(260*split1)+int(260*s2)],Y_all[int(260*split1):int(260*split1)+int(260*s2)])
testX,testY = (X_all[int(260*split1)+int(260*s2):],Y_all[int(260*split1)+int(260*s2):])
print(trainX.shape,valX.shape,testX.shape,X_all.shape)

vae = Model(x, x_decoded_mean_squash)
vae.summary()
(130, 500, 1) (65, 500, 1) (275, 500, 1) (470, 500, 1)
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_13 (InputLayer)           (None, 500, 1, 1)    0                                            
__________________________________________________________________________________________________
conv2d_6 (Conv2D)               (None, 461, 1, 2)    82          input_13[0][0]                   
__________________________________________________________________________________________________
flatten_10 (Flatten)            (None, 922)          0           conv2d_6[0][0]                   
__________________________________________________________________________________________________
dense_49 (Dense)                (None, 5)            4615        flatten_10[0][0]                 
__________________________________________________________________________________________________
dense_50 (Dense)                (None, 3)            18          dense_49[0][0]                   
__________________________________________________________________________________________________
dense_51 (Dense)                (None, 3)            18          dense_49[0][0]                   
__________________________________________________________________________________________________
lambda_6 (Lambda)               (None, 3)            0           dense_50[0][0]                   
                                                                 dense_51[0][0]                   
__________________________________________________________________________________________________
dense_52 (Dense)                (None, 3)            12          lambda_6[0][0]                   
__________________________________________________________________________________________________
dense_53 (Dense)                (None, 5)            20          dense_52[0][0]                   
__________________________________________________________________________________________________
dense_54 (Dense)                (None, 922)          5532        dense_53[0][0]                   
__________________________________________________________________________________________________
reshape_6 (Reshape)             (None, 461, 1, 2)    0           dense_54[0][0]                   
__________________________________________________________________________________________________
conv2d_transpose_6 (Conv2DTrans (None, 500, 1, 1)    81          reshape_6[0][0]                  
==================================================================================================
Total params: 10,378
Trainable params: 10,378
Non-trainable params: 0
__________________________________________________________________________________________________
In [122]:
trainX,trainY,valX,valY,testX,testY,X_all,Y_all=shuffle_data(X_all,Y_all)

clust_err = 0
class_err = 0
count3 = 0
for i,loss in enumerate([vae_loss,'binary_crossentropy','mean_squared_error']):    
    if i == 0:
        title = 'vae_loss'
        print('\ntrained using vae_loss\n')
    else:
        title = str(loss)
        print('\ntrained using',str(loss))
    vae = Model(x, x_decoded_mean_squash)
    vae.compile(optimizer='rmsprop',loss=loss)
    vae.fit(trainX.reshape((130,500,1,1)), trainX.reshape((130,500,1,1)), epochs=5, batch_size=20,verbose=False)
    encoder = Model(x, z_mean)
    train_encode = encoder.predict(trainX.reshape((130,500,1,1)), batch_size=20)
    for train, label,div in [(valX,valY,'val'),(testX,testY,'test'),(X_all,Y_all,'all')]:
        print('encoding the',div,'set')
        label =label.reshape(label.shape[0])
        x_test_encoded = encoder.predict(train.reshape((train.shape[0],500,1,1)), batch_size=20)
        print('data_shape:',x_test_encoded.shape,'\n')

        kmeans = KMeans(n_clusters=2, random_state=0).fit(x_test_encoded)
        neigh = KNeighborsClassifier(n_neighbors=2)
        
        nlabel = kmeans.predict(x_test_encoded)
        
        print(train_encode.shape,trainY.shape,x_test_encoded.shape)
        neigh.fit(train_encode,trainY.reshape((trainY.shape[0],)))
        pred=neigh.predict(x_test_encoded)

        count = 0
        cnt2 = 0
        for i in range(label.shape[0]):
            p1=x_test_encoded[i][0]
            p2=x_test_encoded[i][1]
            if label[i] == nlabel[i]:
                count+=1
            if label[i] == pred[i]:
                cnt2+=1
            if label[i]:
                plt.scatter(p1,p2,c='r')
            else:
                plt.scatter(p1,p2,c='b')
        
        title2 = title + ' ' + div
        plt.title(title2)
        plt.xlabel('latent dimension 1')
        plt.ylabel('latent dimension 2')
        
        print('labeled stress states')
        plt.show(0)
    
        title2 = title + ' ' + div
        plt.title(title2)
        plt.scatter(x_test_encoded[:,0],x_test_encoded[:,1], c=nlabel, s=65, cmap='viridis')
        print('clustered stress states')
        plt.show(1)
        
        print('clustering acc:',count/label.shape[0],'\n')
        clust_err += count/label.shape[0]

        for i in range(pred.shape[0]):
            p1=x_test_encoded[i][0]
            p2=x_test_encoded[i][1]
            if pred[i] == 1:
                plt.scatter(p1,p2,c='m')
            else:
                plt.scatter(p1,p2,c='c')
                
        title2 = title + ' ' + div
        plt.title(title2)
        plt.xlabel('latent dimension 1')
        plt.ylabel('latent dimension 2')
        plt.show(2)
        print('knn-clasified stress states')
        print('classification acc:',cnt2/label.shape[0],'\n')
        class_err += cnt2/label.shape[0]
        count3 +=1
trained using vae_loss

encoding the val set
data_shape: (65, 3) 

(130, 3) (130, 1) (65, 3)
labeled stress states
clustered stress states
clustering acc: 0.6307692307692307 

knn-clasified stress states
classification acc: 0.676923076923077 

encoding the test set
data_shape: (275, 3) 

(130, 3) (130, 1) (275, 3)
labeled stress states
clustered stress states
clustering acc: 0.39636363636363636 

knn-clasified stress states
classification acc: 0.6436363636363637 

encoding the all set
data_shape: (470, 3) 

(130, 3) (130, 1) (470, 3)
labeled stress states
clustered stress states
clustering acc: 0.6276595744680851 

knn-clasified stress states
classification acc: 0.6893617021276596 


trained using binary_crossentropy
encoding the val set
data_shape: (65, 3) 

(130, 3) (130, 1) (65, 3)
labeled stress states
clustered stress states
clustering acc: 0.6307692307692307 

knn-clasified stress states
classification acc: 0.676923076923077 

encoding the test set
data_shape: (275, 3) 

(130, 3) (130, 1) (275, 3)
labeled stress states
clustered stress states
clustering acc: 0.39636363636363636 

knn-clasified stress states
classification acc: 0.6436363636363637 

encoding the all set
data_shape: (470, 3) 

(130, 3) (130, 1) (470, 3)
labeled stress states
clustered stress states
clustering acc: 0.6276595744680851 

knn-clasified stress states
classification acc: 0.6893617021276596 


trained using mean_squared_error
encoding the val set
data_shape: (65, 3) 

(130, 3) (130, 1) (65, 3)
labeled stress states
clustered stress states
clustering acc: 0.6307692307692307 

knn-clasified stress states
classification acc: 0.676923076923077 

encoding the test set
data_shape: (275, 3) 

(130, 3) (130, 1) (275, 3)
labeled stress states
clustered stress states
clustering acc: 0.39636363636363636 

knn-clasified stress states
classification acc: 0.6436363636363637 

encoding the all set
data_shape: (470, 3) 

(130, 3) (130, 1) (470, 3)
labeled stress states
clustered stress states
clustering acc: 0.6276595744680851 

knn-clasified stress states
classification acc: 0.6893617021276596 

In [123]:
class_err/count3,clust_err/count3
Out[123]:
(0.6699737142290334, 0.5515974805336508)
In [124]:
count3=0
clust_err=0
class_err=0
for i,loss in enumerate([vae_loss,'binary_crossentropy','mean_squared_error']):
    vae = Model(x, x_decoded_mean_squash)
    vae.compile(optimizer='rmsprop',loss=loss)
    vae.fit(trainX.reshape((130,500,1,1)), trainX.reshape((130,500,1,1)), epochs=5, batch_size=20)
    
    if i == 0:
        title = 'vae_loss'
        print('\nvae_loss\n')
    else:
        title = str(loss)
        print('\n',str(loss))

    for train, label,div in [(X_all2.reshape((470,500,1,1)),Y_all2,'all')]:
        print(div,'set')
        label =label.reshape(label.shape[0])
        encoder = Model(x, z_mean)
        x_test_encoded = encoder.predict(train, batch_size=20)
        print('data_shape:',x_test_encoded.shape,'\n')

        stresses=[]
        relaxes=[]
        for i in range(len(list(label))):
            if label[i] == 1:
                stresses.append(x_test_encoded[i])
            else:
                relaxes.append(x_test_encoded[i])
        relaxes=np.array(relaxes)
        stresses=np.array(stresses)  

        kmeans = KMeans(n_clusters=2, random_state=0).fit(x_test_encoded)
        neigh = KNeighborsClassifier(n_neighbors=2)
        
        nlabel = kmeans.predict(x_test_encoded)

        neigh.fit(x_test_encoded,label)
        pred=neigh.predict(x_test_encoded)
        
        count = 0
        cnt2 = 0
        for i in range(label.shape[0]):
            if label[i] == nlabel[i]:
                count+=1
            if label[i] == pred[i]:
                cnt2+=1

        a1=np.squeeze(np.asarray(stresses[:,0]))
        a2=np.squeeze(np.asarray(stresses[:,1]))
        a3=np.squeeze(np.asarray(stresses[:,2]))
        b1=np.squeeze(np.asarray(relaxes[:,0]))
        b2=np.squeeze(np.asarray(relaxes[:,1]))
        b3=np.squeeze(np.asarray(relaxes[:,2]))

        fig = plt.figure()
        ax = fig.add_subplot(111, projection='3d')
        ax.scatter(xs=np.real(a3), ys=np.real(a2), 
        zs=np.real(a1), zdir='z', s=20, c='r', depthshade=True)
        ax.scatter(xs=np.real(b3), ys=np.real(b2), 
        zs=np.real(b1), zdir='z', s=20, c='b', depthshade=True)
        r_patch = mpatches.Patch(color='red', label='stressed')
        b_patch = mpatches.Patch(color='blue', label='relaxed')
        plt.legend(handles=[r_patch,b_patch])
        fig.suptitle('labeled stress states', fontsize=20)

        plt.show()
        
        stresses2=[]
        relaxes2=[]
        for i in range(len(list(nlabel))):
            if nlabel[i] == 1:
                stresses2.append(x_test_encoded[i])
            else:
                relaxes2.append(x_test_encoded[i])
        relaxes2=np.array(relaxes2)
        stresses2=np.array(stresses2) 
        a1=np.squeeze(np.asarray(stresses2[:,0]))
        a2=np.squeeze(np.asarray(stresses2[:,1]))
        a3=np.squeeze(np.asarray(stresses2[:,2]))
        b1=np.squeeze(np.asarray(relaxes2[:,0]))
        b2=np.squeeze(np.asarray(relaxes2[:,1]))
        b3=np.squeeze(np.asarray(relaxes2[:,2]))

        fig = plt.figure()
        ax = fig.add_subplot(111, projection='3d')
        ax.scatter(xs=np.real(a3), ys=np.real(a2), 
        zs=np.real(a1), zdir='z', s=20, c='y', depthshade=True)
        ax.scatter(xs=np.real(b3), ys=np.real(b2), 
        zs=np.real(b1), zdir='z', s=20, c='k', depthshade=True)
        r_patch = mpatches.Patch(color='y', label='stressed')
        b_patch = mpatches.Patch(color='k', label='relaxed')
        plt.legend(handles=[r_patch,b_patch])
        fig.suptitle('clustered stress states', fontsize=20)
        plt.show()
        
        print('clustering acc:',count/label.shape[0],'\n')
        clust_err +=  count/label.shape[0]
        
        stresses3=[]
        relaxes3=[]
        for i in range(len(list(nlabel))):
            if pred[i] == 1:
                stresses3.append(x_test_encoded[i])
            else:
                relaxes3.append(x_test_encoded[i])
        relaxes3=np.array(relaxes3)
        stresses3=np.array(stresses3) 
        a1=np.squeeze(np.asarray(stresses3[:,0]))
        a2=np.squeeze(np.asarray(stresses3[:,1]))
        a3=np.squeeze(np.asarray(stresses3[:,2]))
        b1=np.squeeze(np.asarray(relaxes3[:,0]))
        b2=np.squeeze(np.asarray(relaxes3[:,1]))
        b3=np.squeeze(np.asarray(relaxes3[:,2]))

        fig = plt.figure()
        ax = fig.add_subplot(111, projection='3d')
        ax.scatter(xs=np.real(a3), ys=np.real(a2), 
        zs=np.real(a1), zdir='z', s=20, c='m', depthshade=True)
        ax.scatter(xs=np.real(b3), ys=np.real(b2), 
        zs=np.real(b1), zdir='z', s=20, c='c', depthshade=True)
        r_patch = mpatches.Patch(color='m', label='stressed')
        b_patch = mpatches.Patch(color='c', label='relaxed')
        plt.legend(handles=[r_patch,b_patch])
        fig.suptitle('knn-classified stress states', fontsize=20)
        
        plt.show()
        print('classification acc:',cnt2/label.shape[0],'\n')
        class_err +=  cnt2/label.shape[0]
        count3+=1
Epoch 1/5
130/130 [==============================] - 2s 14ms/step - loss: 18.7792
Epoch 2/5
130/130 [==============================] - 0s 1ms/step - loss: 18.7776
Epoch 3/5
130/130 [==============================] - 0s 1ms/step - loss: 18.7775
Epoch 4/5
130/130 [==============================] - 0s 1ms/step - loss: 18.7775
Epoch 5/5
130/130 [==============================] - 0s 1ms/step - loss: 18.7775

vae_loss

all set
data_shape: (470, 3) 

clustering acc: 0.6106382978723405 

classification acc: 0.7808510638297872 

Epoch 1/5
130/130 [==============================] - 2s 16ms/step - loss: -0.0235
Epoch 2/5
130/130 [==============================] - 0s 1ms/step - loss: -0.0235
Epoch 3/5
130/130 [==============================] - 0s 1ms/step - loss: -0.0235
Epoch 4/5
130/130 [==============================] - 0s 1ms/step - loss: -0.0235
Epoch 5/5
130/130 [==============================] - 0s 1ms/step - loss: -0.0235

 binary_crossentropy
all set
data_shape: (470, 3) 

clustering acc: 0.6106382978723405 

classification acc: 0.7808510638297872 

Epoch 1/5
130/130 [==============================] - 2s 16ms/step - loss: 0.0099
Epoch 2/5
130/130 [==============================] - 0s 1ms/step - loss: 0.0099
Epoch 3/5
130/130 [==============================] - 0s 1ms/step - loss: 0.0099
Epoch 4/5
130/130 [==============================] - 0s 2ms/step - loss: 0.0099
Epoch 5/5
130/130 [==============================] - 0s 2ms/step - loss: 0.0099

 mean_squared_error
all set
data_shape: (470, 3) 

clustering acc: 0.6106382978723405 

classification acc: 0.7808510638297872 

Average Clustering Accuracy over 12 Gridsearches (4 data splits and 3 loss functions)

In [125]:
print(clust_err/count3)
0.6106382978723405

Average Clustering Accuracy over 12 Gridsearches (4 data splits and 3 loss functions)

In [126]:
print(class_err/count3)
0.7808510638297873

Conclusion

Data Augmentation - Next Steps

For our next steps, we consider data augmentation techniques and further model parametrization. For a given architecture, its impossible to explore all the parameters, and therefore it might not be ideal to focus on hyperparaemeter optimaztion. We also notice that we were able to accurately classify stressed and relaxed signals in our 2D and 3D latent space with over 70 percent accuracy, which Instead, and also factoring in concerns of model generalizability, it may be more worthwhile to focus on data augmentation.

Our original approach in augmenting our dataset, when we only had 10 examples, was to add zero mean Gaussian noise to each data example. However, we quickly found that our models overfit these redundant input patterns. While this approach alone may not succeed in augmenting the data appropriately, it is similar to the jittering technique (adding noise) employed as one of many data augmentation techniques here for wearable, time-series data: https://arxiv.org/pdf/1706.00527.pdf. This reference outlines additional techniques for general time-series augmentation applicable to our dataset that we could use in the future, and are heavily inspired from image dataset augmentation techniques. Examples include window slicing (cropping), image warping (window warping - locally disrupting subsets of points within the t-series), global scaling, and global rotation. Some of the same/simalr techniques (data warping, slicing, mixing) are described here: https://aaltd16.irisa.fr/files/2016/08/AALTD16_paper_9.pdf. Moreover, in terms of generating additive noise, we might consider exploring the effectiveness of adding noise from a family of zero-centered Gaussians (or another distribution) to random subset of stressed/relaxed signals. Finally, we also considered encoding the time-series as images, and passing this image as input into a 2D CNN architecture. This approach appeared impractible, however, when we could more computationally efficiently process our EDA signals with a 1D architecture. However, this technique might be effective if we transform the images (raw time-series encodings) into Gramian Angular Field and Markov Transition Field images, and only then learning high-level features from these complex image encodings via a "tiled" CNN architecture as described here - https://pdfs.semanticscholar.org/32e7/b2ddc781b571fa023c205753a803565543e7.pdf.

Looking forward, it may be beneficial to explore a metasearch over an ensemble of the techniques employed above. While this is a broad scope, we may limit our scope by readjusting our labels. For example, we might focus on extreme stressed states such as dehydration, fatigue, etc, and have these states labeled as stressed and all other states as non-stressed. This approach might constrain the size of our decision problem, and allow us to leverage data augmentation techniques and generative modeling to be able to perform online prediction of extremely stressed states (via compact NN model trained remotely and implemented on IoT device).

Binary Classification Techniques

Below gives an overview of what we found using 3 different classification techniques.

Logistic Regression Test Set Accuracy:

Length 100 - 84.7%

Length 500 - 78%

Decision Tree Test Set Accuracy:

Length 100 - 71.2%

Length 500 - 78.4%

CNN Test Set Accuracy:

Length 100 [Best model] - 87.5%

Length 500 [Best model] - 83.7%

As we hypothesized, the Convolutional Neural Network performed the best on the test set, with 87.5% accuracy. We were very pleased by this score, due to the fact that recent literature on state-of-the-art stress detection gives accuracy from EDA signals in that range. Instead of extracting features, we approached the problem in a different way by directly looking at the time-series signal.

One of the things we did not do but could do to further test the CNN to detect stress is cross-validate on every subject by fitting the model 16 times, leaving out a different subject each time, and averaging the accuracy. Instead, we just tested on the same two subjects each time.

However, one of the biggest take-aways from this project was the fact that the CNN filters were able to pick up the SCR shapes, as we expected.

For example, this is just one of our data "stressed" segments and an example of a filter that was created by the CNN:

In [127]:
plt.show()
plt.figure(figsize= (15,5))
plt.title('Segmented Stress Example')
plt.plot(X_train2[108,:])

print('Example of a Filter from CNN')
Example of a Filter from CNN

Example of a Filter from one of our CNNs fileterrr.png

This demonstrates that our CNN is learning the shape of the SCR.

Resources:

Data

[https://www.utdallas.edu/~nourani/Bioinformatics/Biosensor_Data/]

Bibliography:

[Seta et al. 2010] Seta, Cornelia, et al. “Discriminating Stress From Cognitive Load Using a Wearable EDA Device - IEEE Journals & Magazine.” Design and Implementation of Autonomous Vehicle Valet Parking System - IEEE Conference Publication, Wiley-IEEE Press, Mar. 2010, ieeexplore.ieee.org/document/5325784/.

[Cho et al. 2017] Cho, Dongrae. “Detection of Stress Levels from Biosignals Measured in Virtual Reality Environments Using a Kernel-Based Extreme Learning Machine.” Sensors, 2017, www.readbyqxmd.com/read/29064457/detection-of-stress-levels-from-biosignals-measured-in-virtual-reality-environments-using-a-kernel-based-extreme-learning-machine.

[Alexandratos 2014] Alexandratos, Vsileios. “Mobile Real-Time Stress Detection.” Delft University of Technology, 2014, repository.tudelft.nl/islandora/object/uuid:c3e56b27-97ff-459b-9f85-dc05f8e3c088/datastream/OBJ.

Assistance with Coding:

1D CNN Help: https://keras.io/layers/convolutional/

For Decision Trees: http://dataaspirant.com/2017/02/01/decision-tree-algorithm-python-with-scikit-learn/

For Logistic Regression: https://towardsdatascience.com/logistic-regression-using-python-sklearn-numpy-mnist-handwriting-recognition-matplotlib-a6b31e2b166a

ARIMA model: https://machinelearningmastery.com/arima-for-time-series-forecasting-with-python/

CNN on Time-Series References

https://arxiv.org/pdf/1610.01683.pdf

https://arxiv.org/pdf/1611.06455.pdf